modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/theeconomist | 87b63683ec35c079bdab8a77aba9982cf404aeb6 | 2021-05-23T01:35:15.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/theeconomist | 100 | null | transformers | 4,600 | ---
language: en
thumbnail: https://www.huggingtweets.com/theeconomist/1607116194498/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/879361767914262528/HdRauDM-_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">The Economist 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@theeconomist bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@theeconomist's tweets](https://twitter.com/theeconomist).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3233</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>112</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>1</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3120</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pilbjv0d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theeconomist's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2lt3277j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2lt3277j/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/theeconomist'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
m3hrdadfi/bert2bert-fa-wiki-summary | e9dc167bd34be7161f2a7e7c680c3c5cf7d53de2 | 2020-12-11T21:50:20.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"fa",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | m3hrdadfi | null | m3hrdadfi/bert2bert-fa-wiki-summary | 100 | null | transformers | 4,601 | ---
language: fa
license: apache-2.0
tags:
- summarization
---
A Bert2Bert model trained on the Wiki Summary dataset to summarize Persian Wikipedia articles. The model achieved an 8.47 ROUGE-2 score.
For more detail, please follow the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary) repo.
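The card does not include a usage snippet, so here is a minimal, hedged sketch; it assumes the generic `summarization` pipeline can drive this encoder-decoder checkpoint with its bundled tokenizer.
```python
# Hypothetical usage sketch (not from the original card): load the checkpoint
# through the generic summarization pipeline and summarize a Persian article.
from transformers import pipeline

summarizer = pipeline("summarization", model="m3hrdadfi/bert2bert-fa-wiki-summary")
article = "..."  # placeholder: a Persian article to summarize
print(summarizer(article, max_length=128, min_length=16)[0]["summary_text"])
```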
## Eval results
The following table summarizes the ROUGE scores obtained by the Bert2Bert model.
| % | Precision | Recall | FMeasure |
|:-------:|:---------:|:------:|:--------:|
| ROUGE-1 | 28.14 | 30.86 | 27.34 |
| ROUGE-2 | 07.12 | 08.47* | 07.10 |
| ROUGE-L | 28.49 | 25.87 | 25.50 |
## Questions?
Post a Github issue on the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary/issues) repo.
|
microsoft/swin-large-patch4-window7-224-in22k | 3a03736addbe3c9ccf022e154193b8776e050135 | 2022-05-16T19:59:30.000Z | [
"pytorch",
"tf",
"swin",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2103.14030",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/swin-large-patch4-window7-224-in22k | 100 | null | transformers | 4,602 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window7-224-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window7-224-in22k")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
openclimatefix/dgmr-context-conditioning-stack | ec8f1daf261a9d6919a11c8d831c48e69511ce4c | 2022-06-20T08:25:18.000Z | [
"pytorch",
"transformers"
] | null | false | openclimatefix | null | openclimatefix/dgmr-context-conditioning-stack | 100 | null | transformers | 4,603 | Entry not found |
persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing | db4386cf6a360784bc3373a1debd1a046d57244f | 2021-09-23T16:20:00.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:qqp",
"transformers",
"query-paraphrasing",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing | 100 | null | transformers | 4,604 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخصیص سوالات هممعنی)
This is an mT5-base model for detecting whether two Persian queries are paraphrases of each other.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
philschmid/vit-base-patch16-224-in21k-image-classification-sagemaker | 45424abd0e49de81fbe657b89f9ae707be28e0ea | 2021-06-09T08:07:35.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"model-index"
] | image-classification | false | philschmid | null | philschmid/vit-base-patch16-224-in21k-image-classification-sagemaker | 100 | null | transformers | 4,605 | ---
tags:
- image-classification
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-image-classification-sagemaker
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-image-classification-sagemaker
This model is a fine-tuned version of [vit-base-patch16-224-in21k](https://huggingface.co/vit-base-patch16-224-in21k) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3033
- Accuracy: 0.972
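The auto-generated card omits an inference example; a minimal, hedged sketch follows, assuming the checkpoint ships an image processor config usable by the generic pipeline.
```python
# Hypothetical usage sketch (not part of the auto-generated card).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="philschmid/vit-base-patch16-224-in21k-image-classification-sagemaker",
)
# "path/to/image.jpg" is a placeholder; a URL or PIL.Image also works.
print(classifier("path/to/image.jpg"))
```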
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 313 | 1.4603 | 0.936 |
| 1.6548 | 2.0 | 626 | 0.4451 | 0.966 |
| 1.6548 | 3.0 | 939 | 0.3033 | 0.972 |
### Framework versions
- Transformers 4.6.1
- Pytorch 1.7.1
- Datasets 1.6.2
- Tokenizers 0.10.3
|
thu-coai/LongLM-base | fadb9378d11bb30c3f12f50ebf89bb378d313c57 | 2021-11-24T06:02:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"zh",
"arxiv:2108.12960",
"transformers",
"lm-head",
"autotrain_compatible"
] | text2text-generation | false | thu-coai | null | thu-coai/LongLM-base | 100 | 2 | transformers | 4,606 | ---
language:
- zh
thumbnail: http://coai.cs.tsinghua.edu.cn/coai/img/logo.png?v=13923
tags:
- pytorch
- lm-head
- zh
datasets:
metrics:
widget:
- text: "小咕噜对靳司寒完全是个自来熟,小家伙爬进他怀里小手搂着他的脖子,奶声奶气的要求:“靳蜀黎,你给咕噜讲故事好不好?”讲故事?童话故事吗?“我不会。”小家伙明显不信。嘟着小嘴大眼汪汪的盯着他,“哼。”小家伙轻轻哼了一声,靳司寒默了半晌,<extra_id_1>"
- text: "美女亲自打招呼,这可是破天荒第一次,之前不管他献多少次殷勤,美女<extra_id_1>甩他,难道今天真是老天<extra_id_2>不敢<extra_id_3>的兄连滚带爬的来到<extra_id_4>身边队友都带着艳<extra_id_5>他,<extra_id_6>连计算机系的那票球友都在那儿不住地偷看MAGGIE,这种感觉真<extra_id_7>毙了!"
inference:
parameters:
top_p: 0.9
---
## LongLM
### 1. Parameters
| Versions | $d_m$ | $d_{ff}$ | $d_{kv}$ | $n_h$ | $n_e/n_d$ | \# P |
| ------------ | ----- | -------- | -------- | ----- | --------- | ---- |
| LongLM-small | 512 | 2,048 | 64 | 8 | 6/6 | 60M |
| LongLM-base | 768 | 3,072 | 64 | 12 | 12/12 | 223M |
| LongLM-large | 1,536 | 3,072 | 64 | 12 | 24/32 | 1B |
- $d_m$: the dimension of hidden states
- $d_{ff}$: the dimension of feed forward layers
- $d_{kv}$: the dimension of the keys/values in the self-attention layers
- $n_h$: the number of attention heads
- $n_e$: the number of hidden layers of the encoder
- $n_d$: the number of hidden layers of the decoder
- \#P: the number of parameters
### 2. Pretraining Tasks
Encoder-decoder models are typically trained by maximizing the likelihood of the target output given an input. To improve the capacities of both the encoder and decoder, we propose to train LongLM with two pretraining tasks: text infilling (Raffel et al., 2020) and conditional continuation (Radford et al., 2019). For the first task, the input is a text in which a number of spans are sampled and replaced by special tokens with unique IDs, while the output is the spans delimited by the special tokens used in the input. The lengths of masked spans are drawn from a Poisson distribution with λ=3, and the masked tokens comprise 15% of the original text. As for the second task, the input and output are respectively the front and back half of a text, which is split into two parts randomly.
### 3. Pretraining Data
We collect 120G novels as the pretraining data for LongLM.
### 4. Checkpoints
1. **Model Loading:**
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
# load this checkpoint (LongLM-base) directly from the Hugging Face Hub
tokenizer = T5Tokenizer.from_pretrained('thu-coai/LongLM-base')
model = T5ForConditionalGeneration.from_pretrained('thu-coai/LongLM-base').to(device)
```
2. **Generation:**
```python
input_ids = tokenizer("小咕噜对,<extra_id_1>",return_tensors="pt", padding=True, truncation=True, max_length=512).input_ids.to(device)
gen = model.generate(input_ids, do_sample=True, decoder_start_token_id=1, top_p=0.9, max_length=512)
```
### 5. Dependencies
```
datasets 1.6.2
deepspeed 0.3.16
huggingface-hub 0.0.8
jieba 0.42.1
jsonlines 2.0.0
nltk 3.5
numpy 1.19.5
pytorch-lightning 1.2.0
regex 2020.11.13
rouge 1.0.1
rouge-score 0.0.4
sacrebleu 1.5.0
scipy 1.5.4
sentencepiece 0.1.95
tokenizers 0.10.1
torch 1.8.1
torchaudio 0.8.0
torchmetrics 0.2.0
torchvision 0.9.0
transformers 4.6.1
```
### 6. Contributors
[Jian Guan](https://jianguanthu.github.io/) at [thu-coai](http://coai.cs.tsinghua.edu.cn/)
## Citation
```txt
@misc{guan2021lot,
title={LOT: A Benchmark for Evaluating Chinese Long Text Understanding and Generation},
author={Jian Guan and Zhuoer Feng and Yamei Chen and Ruilin He and Xiaoxi Mao and Changjie Fan and Minlie Huang},
year={2021},
eprint={2108.12960},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
uer/roberta-mini-word-chinese-cluecorpussmall | 49b23897e13966cb55c0babf66b6b7453ff1714f | 2022-02-19T15:57:45.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/roberta-mini-word-chinese-cluecorpussmall | 100 | 1 | transformers | 4,607 | \---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "最近一趟去北京的[MASK]几点发车"
---
# Chinese word-based RoBERTa Miniatures
## Model description
This is the set of 5 Chinese word-based RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
Most Chinese pre-trained weights are based on Chinese character. Compared with character-based models, word-based models are faster (because of shorter sequence length) and have better performance according to our experimental results. To this end, we released the 5 Chinese word-based RoBERTa models of different sizes. In order to facilitate users to reproduce the results, we used the publicly available corpus and word segmentation tool, and provided all training details.
Notice that the output results of Hosted inference API (right) are not properly displayed. When the predicted word has multiple characters, the single word instead of entire sentence is displayed. One can click **JSON Output** for normal output results.
You can download the 5 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **word-based RoBERTa-Tiny** | [**L=2/H=128 (Tiny)**][2_128] |
| **word-based RoBERTa-Mini** | [**L=4/H=256 (Mini)**][4_256] |
| **word-based RoBERTa-Small** | [**L=4/H=512 (Small)**][4_512] |
| **word-based RoBERTa-Medium** | [**L=8/H=512 (Medium)**][8_512] |
| **word-based RoBERTa-Base** | [**L=12/H=768 (Base)**][12_768] |
Compared with [char-based models](https://huggingface.co/uer/chinese_roberta_L-2_H-128), word-based models achieve better results in most cases. Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny(char) | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| **RoBERTa-Tiny(word)** | **74.3(+2.0)** | **86.4** | **93.2** | **82.0** | **66.4** | **58.2** | **59.6** |
| RoBERTa-Mini(char) | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| **RoBERTa-Mini(word)** | **76.7(+1.0)** | **87.6** | **94.1** | **85.4** | **66.9** | **59.2** | **67.3** |
| RoBERTa-Small(char) | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| **RoBERTa-Small(word)** | **78.1(+1.3)** | **88.5** | **94.7** | **87.4** | **67.6** | **60.9** | **69.8** |
| RoBERTa-Medium(char) | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| **RoBERTa-Medium(word)** | **78.9(+1.1)** | **89.2** | **95.1** | **88.0** | **67.8** | **60.6** | **73.0** |
| RoBERTa-Base(char) | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
| **RoBERTa-Base(word)** | **80.2(+0.7)** | **90.3** | **95.7** | **89.4** | **68.0** | **61.5** | **76.8** |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of word-based RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-medium-word-chinese-cluecorpussmall')
>>> unmasker("[MASK]的首都是北京。")
[
{'sequence': '中国 的首都是北京。',
'score': 0.21525809168815613,
'token': 2873,
'token_str': '中国'},
{'sequence': '北京 的首都是北京。',
'score': 0.15194718539714813,
'token': 9502,
'token_str': '北京'},
{'sequence': '我们 的首都是北京。',
'score': 0.08854265511035919,
'token': 4215,
'token_str': '我们'},
{'sequence': '美国 的首都是北京。',
'score': 0.06808705627918243,
'token': 7810,
'token_str': '美国'},
{'sequence': '日本 的首都是北京。',
'score': 0.06071401759982109,
'token': 7788,
'token_str': '日本'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, BertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFBertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Since BertTokenizer does not support sentencepiece, AlbertTokenizer is used here.
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. Google's [sentencepiece](https://github.com/google/sentencepiece) is used for word segmentation. The sentencepiece model is trained on CLUECorpusSmall corpus:
```
>>> import sentencepiece as spm
>>> spm.SentencePieceTrainer.train(input='cluecorpussmall.txt',
model_prefix='cluecorpussmall_spm',
vocab_size=100000,
max_sentence_length=1024,
max_sentencepiece_length=6,
user_defined_symbols=['[MASK]','[unused1]','[unused2]',
'[unused3]','[unused4]','[unused5]','[unused6]',
'[unused7]','[unused8]','[unused9]','[unused10]'],
pad_id=0,
pad_piece='[PAD]',
unk_id=1,
unk_piece='[UNK]',
bos_id=2,
bos_piece='[CLS]',
eos_id=3,
eos_piece='[SEP]',
train_extremely_large_corpus=True
)
```
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking word-based RoBERTa-Medium as an example:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq512_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--pretrained_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-word-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-word-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-word-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-word-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall |
Finnish-NLP/t5-small-nl24-casing-punctuation-correction | c622037d360a3ab1836607c0c4c22557b3a4843f | 2022-05-22T10:07:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Finnish-NLP | null | Finnish-NLP/t5-small-nl24-casing-punctuation-correction | 100 | null | transformers | 4,608 | Based on Finnish pretrained T5 model version small-nl24
Train data:
Around 300k samples from from following datasets
- [wikipedia](https://huggingface.co/datasets/wikipedia)
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Tested on 1,000 samples from the datasets above: median CER 1.1%, mean CER 4.2%.
More detailed info coming later...
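In the meantime, a minimal usage sketch (not part of the original card) is shown below; it assumes raw, lowercased Finnish text can be passed directly without a task prefix.
```python
# Hypothetical usage sketch: feed uncased, unpunctuated Finnish text and let the
# model restore casing and punctuation.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Finnish-NLP/t5-small-nl24-casing-punctuation-correction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "moi miten menee oletko käynyt helsingissä"  # illustrative input
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```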
|
nickmuchi/deberta-v3-base-finetuned-finance-text-classification | a90a4ec1eddb5d7d68afac4876fe8b65c76e10e5 | 2022-05-30T12:11:47.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"dataset:financial_phrasebank",
"dataset:Kaggle Self label",
"dataset:nickmuchi/financial-classification",
"transformers",
"generated_from_trainer",
"financial-sentiment-analysis",
"sentiment-analysis",
"sentence_50agree",
"financial",
"stocks",
"sentiment",
"license:mit",
"model-index"
] | text-classification | false | nickmuchi | null | nickmuchi/deberta-v3-base-finetuned-finance-text-classification | 100 | null | transformers | 4,609 | ---
license: mit
tags:
- generated_from_trainer
- financial-sentiment-analysis
- sentiment-analysis
- sentence_50agree
- financial
- stocks
- sentiment
datasets:
- financial_phrasebank
- Kaggle Self label
- nickmuchi/financial-classification
widget:
- text: "The USD rallied by 3% last night as the Fed hiked interest rates"
example_title: "Bullish Sentiment"
- text: "Covid-19 cases have been increasing over the past few months impacting earnings for global firms"
example_title: "Bearish Sentiment"
- text: "the USD has been trending lower"
example_title: "Mildly Bearish Sentiment"
- text: "The USD rallied by 3% last night as the Fed hiked interest rates however, higher interest rates will increase mortgage costs for homeowners"
example_title: "Neutral"
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: deberta-v3-base-finetuned-finance-text-classification
results: []
---
# deberta-v3-base-finetuned-finance-text-classification
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the sentence_50Agree [financial-phrasebank + Kaggle Dataset](https://huggingface.co/datasets/nickmuchi/financial-classification), a dataset consisting of 4840 Financial News categorised by sentiment (negative, neutral, positive). The Kaggle dataset includes Covid-19 sentiment data and can be found here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.7687
- Accuracy: 0.8913
- F1: 0.8912
- Precision: 0.8927
- Recall: 0.8913
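The card has no explicit how-to-use section; a minimal, hedged sketch is shown below, assuming the checkpoint's `id2label` exposes the three sentiment classes described above. The example sentence is taken from the card's widget.
```python
# Hypothetical usage sketch (not part of the original card).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nickmuchi/deberta-v3-base-finetuned-finance-text-classification",
)
print(classifier("The USD rallied by 3% last night as the Fed hiked interest rates"))
```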
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.4187 | 0.8399 | 0.8407 | 0.8687 | 0.8399 |
| 0.5002 | 2.0 | 570 | 0.3065 | 0.8755 | 0.8733 | 0.8781 | 0.8755 |
| 0.5002 | 3.0 | 855 | 0.4148 | 0.8775 | 0.8775 | 0.8778 | 0.8775 |
| 0.1937 | 4.0 | 1140 | 0.4249 | 0.8696 | 0.8699 | 0.8719 | 0.8696 |
| 0.1937 | 5.0 | 1425 | 0.5121 | 0.8834 | 0.8824 | 0.8831 | 0.8834 |
| 0.0917 | 6.0 | 1710 | 0.6113 | 0.8775 | 0.8779 | 0.8839 | 0.8775 |
| 0.0917 | 7.0 | 1995 | 0.7296 | 0.8775 | 0.8776 | 0.8793 | 0.8775 |
| 0.0473 | 8.0 | 2280 | 0.7034 | 0.8953 | 0.8942 | 0.8964 | 0.8953 |
| 0.0275 | 9.0 | 2565 | 0.6995 | 0.8834 | 0.8836 | 0.8846 | 0.8834 |
| 0.0275 | 10.0 | 2850 | 0.7736 | 0.8755 | 0.8755 | 0.8789 | 0.8755 |
| 0.0186 | 11.0 | 3135 | 0.7173 | 0.8814 | 0.8814 | 0.8840 | 0.8814 |
| 0.0186 | 12.0 | 3420 | 0.7659 | 0.8854 | 0.8852 | 0.8873 | 0.8854 |
| 0.0113 | 13.0 | 3705 | 0.8415 | 0.8854 | 0.8855 | 0.8907 | 0.8854 |
| 0.0113 | 14.0 | 3990 | 0.7577 | 0.8953 | 0.8951 | 0.8966 | 0.8953 |
| 0.0074 | 15.0 | 4275 | 0.7687 | 0.8913 | 0.8912 | 0.8927 | 0.8913 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adamnik/bert-entailment-detection | f5d7ce4378701cd2931790a0342fea555b3ea91c | 2022-07-21T01:30:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | adamnik | null | adamnik/bert-entailment-detection | 100 | null | transformers | 4,610 | ---
license: mit
---
|
Helsinki-NLP/opus-mt-id-es | 934c40fafa6de947f8e05f05f8e3d25c30bbb744 | 2021-09-09T22:11:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"id",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-id-es | 99 | null | transformers | 4,611 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-id-es
* source languages: id
* target languages: es
* OPUS readme: [id-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.id.es | 21.8 | 0.483 |
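A minimal usage sketch (not part of the original card); it assumes the generic translation pipeline works with this MarianMT checkpoint.
```python
# Hypothetical usage sketch: Indonesian -> Spanish translation.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-id-es")
print(translator("Selamat pagi, apa kabar?")[0]["translation_text"])
```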
|
Salesforce/mixqg-3b | 0db3c8a87cb87cad44a5cc1d2bf05df0a3bddfe4 | 2021-10-18T16:19:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:2110.08175",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Salesforce | null | Salesforce/mixqg-3b | 99 | 4 | transformers | 4,612 | ---
language: en
widget:
- text: Robert Boyle \\n In the late 17th century, Robert Boyle proved that air is necessary for combustion.
---
# MixQG (3b-sized model)
MixQG is a new question generation model pre-trained on a collection of QA datasets with a mix of answer types. It was introduced in the paper [MixQG: Neural Question Generation with Mixed Answer Types](https://arxiv.org/abs/2110.08175) and the associated code is released in [this](https://github.com/salesforce/QGen) repository.
### How to use
Using Huggingface pipeline abstraction:
```
from transformers import pipeline
nlp = pipeline("text2text-generation", model='Salesforce/mixqg-3b', tokenizer='Salesforce/mixqg-3b')
CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion."
ANSWER = "Robert Boyle"
def format_inputs(context: str, answer: str):
return f"{answer} \\n {context}"
text = format_inputs(CONTEXT, ANSWER)
nlp(text)
# should output [{'generated_text': 'Who proved that air is necessary for combustion?'}]
```
Using the pre-trained model directly:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('Salesforce/mixqg-3b')
model = AutoModelForSeq2SeqLM.from_pretrained('Salesforce/mixqg-3b')
CONTEXT = "In the late 17th century, Robert Boyle proved that air is necessary for combustion."
ANSWER = "Robert Boyle"
def format_inputs(context: str, answer: str):
return f"{answer} \\n {context}"
text = format_inputs(CONTEXT, ANSWER)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=32, num_beams=4)
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(output)
# should output "Who proved that air is necessary for combustion?"
```
### Citation
```
@misc{murakhovska2021mixqg,
title={MixQG: Neural Question Generation with Mixed Answer Types},
author={Lidiya Murakhovs'ka and Chien-Sheng Wu and Tong Niu and Wenhao Liu and Caiming Xiong},
year={2021},
eprint={2110.08175},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
THUMT/mGPT | 92dac1dd66b77562f9a8a1fe6a24c35b9368f3f4 | 2021-10-14T05:49:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:2110.06609",
"transformers"
] | text-generation | false | THUMT | null | THUMT/mGPT | 99 | 1 | transformers | 4,613 |
# mGPT
mGPT is pre-trained on the [mC4 dataset](https://huggingface.co/datasets/mc4) using a causal language modeling objective. It was introduced in this [paper](https://arxiv.org/abs/2110.06609) and first released on this page.
## Model description
mGPT is a Transformer-based model pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, it was pre-trained on raw text only, with no human labeling. We use the same tokenization and vocabulary as the [mT5 model](https://huggingface.co/google/mt5-base).
## Intended uses
You can use the raw model for text generation or using prompts for adapting it to a downstream task.
## How to use
You can use this model directly with a pipeline for text generation. Here is how to generate text with this model in PyTorch:
```python
from transformers import MT5Tokenizer, GPT2LMHeadModel, TextGenerationPipeline
tokenizer = MT5Tokenizer.from_pretrained("THUMT/mGPT")
model = GPT2LMHeadModel.from_pretrained("THUMT/mGPT")
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
text = "Replace me by any text you'd like."
text = pipeline(text, do_sample=True, max_length=1024)[0]["generated_text"]
```
## Preprocessing
The texts are tokenized using `sentencepiece` and a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use `<extra_id_0>` to separate lines in a document.
## BibTeX entry and citation info
```bibtex
@misc{tan2021msp,
title={MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators},
author={Zhixing Tan and Xiangwen Zhang and Shuo Wang and Yang Liu},
year={2021},
eprint={2110.06609},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
airesearch/wangchanberta-base-wiki-spm | f727663bc57f94313937dd5d469c2d4424be6e0c | 2021-09-11T09:38:49.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"transformers",
"autotrain_compatible"
] | fill-mask | false | airesearch | null | airesearch/wangchanberta-base-wiki-spm | 99 | null | transformers | 4,614 | ---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-spm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (the scale ranges from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (the scale ranges from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
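For a quick check without the notebook, a minimal fill-mask sketch is shown below (not part of the original card; it assumes the hosted tokenizer and the `<mask>` token work directly with the generic pipeline).
```python
# Hypothetical usage sketch for masked language modeling.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="airesearch/wangchanberta-base-wiki-spm")
print(unmasker("ผมชอบกิน<mask>มาก"))  # "I really like eating <mask>"
```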
<br>
## Training data
`wangchanberta-base-wiki-spm` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/), excluding lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use subword tokens trained with the [SentencePiece](https://github.com/google/sentencepiece) library on the training set of the Thai Wikipedia corpus. The total number of subword tokens is 24,000.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split the sentence and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens; out of these, 80% are replaced with the <mask> token, 10% are left unchanged, and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially: 944,782 sentences for the training set, 24,863 for the validation set, and 24,862 for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ayameRushia/gpt2-small-indonesia-fine-tuning-poem | 39857971a3ec844a0f97b5ff9bdb0eee5f42398a | 2021-08-10T06:50:20.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"id",
"transformers"
] | text-generation | false | ayameRushia | null | ayameRushia/gpt2-small-indonesia-fine-tuning-poem | 99 | 1 | transformers | 4,615 | ---
language: id
widget:
- text: "Wahai rembulan yang tertutup awan hujan"
---
# Indonesian GPT-2 finetuned on Indonesian poems
This is the [Indonesian gpt2-small model](https://huggingface.co/flax-community/gpt2-small-indonesian) fine-tuned on Indonesian poems. The dataset can be found [here](https://huggingface.co/datasets/id_puisi). All training was done in a Google Colab Jupyter notebook (to be shared soon).
The dataset is split into two subsets, with details below:
| split | count (examples) | percentage |
| ---------- | ---------- | -------------- |
| train | 7,358 | 80% |
| validation | 1,890 | 20% |
### Evaluation results
The model evaluation results after 10 epochs are as follows:
| dataset | train/loss | eval/loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| [id puisi](https://huggingface.co/datasets/id_puisi) | 3.324700 | 3.502665 | 33.20 |
The logs can be found in [wandb page here](https://wandb.ai/ayamerushia/gpt-2_poem/runs/36ymudz9/overview?workspace=user-ayamerushia) or tensorboard [here](https://huggingface.co/ayameRushia/gpt2-small-indonesia-fine-tuning-poem/tensorboard)
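A minimal generation sketch (not part of the original card); it assumes the standard GPT-2 text-generation pipeline and reuses the widget prompt above.
```python
# Hypothetical usage sketch for poem generation.
from transformers import pipeline

generator = pipeline("text-generation", model="ayameRushia/gpt2-small-indonesia-fine-tuning-poem")
prompt = "Wahai rembulan yang tertutup awan hujan"
print(generator(prompt, max_length=64, do_sample=True, top_p=0.95)[0]["generated_text"])
```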
|
ethzanalytics/GPT-J-6B-8bit-Convo-D3E | 5a9b2d5f43355d8edaa887463c7e74af6b4554df | 2022-07-20T16:50:51.000Z | [
"pytorch",
"gptj",
"text-generation",
"en",
"dataset:daily_dialog",
"transformers",
"gpt2",
"gpt",
"license:mit"
] | text-generation | false | ethzanalytics | null | ethzanalytics/GPT-J-6B-8bit-Convo-D3E | 99 | 3 | transformers | 4,616 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- daily_dialog
inference: False
---
# GPT-J 6B (8-bit edition) - Daily Dialogues 3 Epoch
> essentially, we combine the workflow presented in the huggingface documentation [here](https://huggingface.co/docs/transformers/training) with the work done by Hivemind in fine-tuning [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) with limited memory. A detailed explanation of how it works can be found in [Hivemind's model card](https://huggingface.co/hivemind/gpt-j-6B-8bit).
- trained for 3 epochs with a batch size of 4 on a modified version of the daily dialogues on a tesla V100
- _note that some of this work is exploratory in nature and subject to being further improved through the Scientific Process_
- Given the above and the fact that the classes for GPT-J are modified in-script, TBD whether it will work with huggingface inference API with the uploaded tokenizer etc.
- Regardless, you can test it out with [this Colaboratory notebook](https://colab.research.google.com/gist/pszemraj/44a263c7bd22d24285b70fcb5717ad4d/gpt-j-6b-8bit-textgen-playground.ipynb)!
_Note: the model was trained with using "person <greek_letter>" as a pseudo-BOS token for usage in the [ai-msgbot](https://github.com/pszemraj/ai-msgbot) project and subsequent training, say, on WhatsApp messages._
* * *
## Examples
### existence
==========Testing Prompt-ID #8 ==========
PROMPT TEXT:
person alpha:
what is the meaning of existence?
person beta:
----------FULL GENERATED TEXT:
person alpha:
what is the meaning of existence?
person beta:
does god exist?
can one know what the creator wants?
if the creator exists, does he need me?
will the creator let me live after I
### "self-aware" named entity recognition
==========Testing Prompt-ID #1 ==========
PROMPT TEXT:
person alpha:
what should I bring to the party?
person beta:
----------FULL GENERATED TEXT:
person alpha:
what should I bring to the party?
person beta:
don’t bring anything
the person alpha: can I borrow the person beta’s pants?
the friend: sure
person alpha and
* * *
```
{'LR_scheduler_gamma': 0.6,
'_n_gpu': 1,
'adafactor': False,
'adam_beta1': 0.9,
'adam_beta2': 0.999,
'adam_epsilon': 1e-08,
'bf16': False,
'bf16_full_eval': False,
'configs_src': 'EleutherAI/gpt-j-6B',
'data_tag': 'DailyDialogues',
'dataloader_drop_last': False,
'dataloader_num_workers': 0,
'dataloader_pin_memory': True,
'ddp_bucket_cap_mb': 'None',
'ddp_find_unused_parameters': 'None',
'debug': '[]',
'deepspeed': 'None',
'disable_tqdm': False,
'do_eval': False,
'do_predict': False,
'do_train': False,
'eval_accumulation_steps': 4,
'eval_batch_size': 4,
'eval_steps': 'None',
'evaluation_strategy': 'no',
'fp16': True,
'fp16_backend': 'auto',
'fp16_full_eval': True,
'fp16_opt_level': 'O1',
'gradient_accumulation_steps': 32,
'gradient_checkpointing': True,
'greater_is_better': 'None',
'group_by_length': False,
'half_precision_backend': 'amp',
'hub_model_id': 'gpt-j-6B-8-bits_DS-DailyDialogues_Ep-3_Bs-4',
'hub_strategy': 'every_save',
'hub_token': '<HUB_TOKEN>',
'ignore_data_skip': False,
'label_names': 'None',
'label_smoothing_factor': 0.0,
'learning_rate': 1e-05,
'length_column_name': 'length',
'load_best_model_at_end': False,
'local_rank': -1,
'log_level': -1,
'log_level_replica': -1,
'log_on_each_node': True,
'logging_dir': '/content/logs',
'logging_first_step': False,
'logging_nan_inf_filter': True,
'logging_steps': 500,
'logging_strategy': 'steps',
'lr_scheduler_type': 'linear',
'max_grad_norm': 0.5,
'max_steps': -1,
'metric_for_best_model': 'None',
'model_src': 'hivemind/gpt-j-6B-8bit',
'mp_parameters': '',
'no_cuda': False,
'num_train_epochs': 3,
'output_dir': './checkpoints',
'overwrite_output_dir': True,
'past_index': -1,
'per_device_eval_batch_size': 4,
'per_device_train_batch_size': 4,
'per_gpu_eval_batch_size': 'None',
'per_gpu_train_batch_size': 'None',
'prediction_loss_only': False,
'push_to_hub': True,
'push_to_hub_model_id': 'None',
'push_to_hub_organization': 'None',
'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>',
'remove_unused_columns': True,
'report_to': "['tensorboard']",
'resume_from_checkpoint': 'None',
'run_name': './checkpoints',
'save_on_each_node': False,
'save_steps': 500,
'save_strategy': 'epoch',
'save_total_limit': 2,
'seed': 42,
'sharded_ddp': '[]',
'skip_memory_metrics': True,
'tf32': 'None',
'tpu_metrics_debug': False,
'tpu_num_cores': 'None',
'train_batch_size': 4,
'train_tag': '8-bits',
'use_legacy_prediction_loop': False,
'warmup_ratio': 0.0,
'warmup_steps': 0,
'weight_decay': 0,
'xpu_backend': 'None'}
```
|
flair/ner-multi-fast | 80ebda976ff428db36b39fe71ec8d06f44bbff24 | 2021-03-02T22:14:04.000Z | [
"pytorch",
"en de nl es",
"dataset:conll2003",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | flair | null | flair/ner-multi-fast | 99 | null | flair | 4,617 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en de nl es
datasets:
- conll2003
widget:
- text: "George Washington ging nach Washington"
---
## 4-Language NER in Flair (English, German, Dutch and Spanish)
This is the fast 4-class NER model for 4 CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). Also kind of works for related languages like French.
F1-Score: **91,51** (CoNLL-03 English), **85,72** (CoNLL-03 German revised), **86,22** (CoNLL-03 Dutch), **85,78** (CoNLL-03 Spanish)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-multi-fast")
# make example sentence in any of the four languages
sentence = Sentence("George Washington ging nach Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.9977)]
Span [5]: "Washington" [− Labels: LOC (0.9895)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus, MultiCorpus
from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the multi-language corpus
corpus: Corpus = MultiCorpus([
CONLL_03(), # English corpus
CONLL_03_GERMAN(), # German corpus
CONLL_03_DUTCH(), # Dutch corpus
CONLL_03_SPANISH(), # Spanish corpus
])
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# GloVe embeddings
WordEmbeddings('glove'),
# FastText embeddings
WordEmbeddings('de'),
# contextual string embeddings, forward
FlairEmbeddings('multi-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('multi-backward-fast'),
]
# embedding stack consists of Flair and GloVe embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-multi-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following papers when using this model.
```
@misc{akbik2019multilingual,
title={Multilingual sequence labeling with one model},
author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland},
booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop},
year = {2019}
}
```
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
|
kssteven/ibert-roberta-large-mnli | 5ec852d6567202390f5bcc558de70e8d23ea7d10 | 2021-05-10T05:35:32.000Z | [
"pytorch",
"ibert",
"text-classification",
"transformers"
] | text-classification | false | kssteven | null | kssteven/ibert-roberta-large-mnli | 99 | null | transformers | 4,618 | Entry not found |
nasa-impact/bert-e-base-mlm | 8afbc45239b08b6aebf6c0f44dc61cf3cf98af95 | 2022-02-24T01:08:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nasa-impact | null | nasa-impact/bert-e-base-mlm | 99 | 4 | transformers | 4,619 | This model is further trained on top of scibert-base using masked language modeling loss (MLM). The corpus is roughly 270,000 earth science-based publications.
The tokenizer used is AutoTokenizer, which is trained on the same corpus.
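A minimal fill-mask sketch (not part of the original card); the example sentence is illustrative only, and it assumes the checkpoint and its tokenizer work with the generic pipeline.
```python
# Hypothetical usage sketch for masked language modeling with BERT-E.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="nasa-impact/bert-e-base-mlm")
print(unmasker("Sea surface [MASK] is a key variable in climate studies."))
```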
Stay tuned for further downstream task tests and updates to the model.
In the works:
- MLM + NSP task loss
- Add more data sources for training
- Test using downstream tasks
|
persiannlp/mt5-small-parsinlu-sentiment-analysis | 8114ac41a91370f89b03c6578963158f6451a412 | 2021-09-23T16:20:41.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-small-parsinlu-sentiment-analysis | 99 | null | transformers | 4,620 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is an mT5-small model for Persian sentiment analysis.
Here is an example of how you can run this model:
```python
import torch
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
import numpy as np
model_name = "persiannlp/mt5-small-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
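# NOTE: `labels` referenced in model_predict below is assumed to be a
# caller-defined list of class names; it is not defined in this snippet.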
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
rathi/storyGenerator | f12db5032e8ac67d55a6c7cd803fc23ac73690da | 2021-05-23T12:11:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rathi | null | rathi/storyGenerator | 99 | null | transformers | 4,621 | ## This is a genre-based Movie plot generator.
For best results, structure the input as follows:
1. Add a `<BOS>` tag at the start.
2. Add a `<genre>` tag right after it, where the genre is one of the lowercased genre tags such as `<action>`, `<romantic>`, `<thriller>`, or `<comedy>`. A minimal generation sketch follows this list.
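A short generation sketch based on the input format above (a rough illustration; the exact genre tag set and the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the GPT-2 based plot generator
tokenizer = AutoTokenizer.from_pretrained("rathi/storyGenerator")
model = AutoModelForCausalLM.from_pretrained("rathi/storyGenerator")

# <BOS> followed by a lowercased genre tag, then (optionally) a few opening words
prompt = "<BOS> <thriller>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0]))
```
|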
smmzhu/DialoGPT-small-SZ | c769886f0effb62b2b2f7ae9d346e1a38f4307a2 | 2022-02-14T20:25:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | smmzhu | null | smmzhu/DialoGPT-small-SZ | 99 | null | transformers | 4,622 | ---
tags:
- conversational
--- |
tals/albert-base-vitaminc | 74617d45d0e1e67c0ec806f893b5bea7dbaab394 | 2022-06-22T23:56:01.000Z | [
"pytorch",
"albert",
"text-classification",
"python",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"transformers"
] | text-classification | false | tals | null | tals/albert-base-vitaminc | 99 | null | transformers | 4,623 | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
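A minimal claim–evidence inference sketch (pairing the claim with the evidence sentence and reading labels from the config are assumptions based on the VitaminC task, not spelled out in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tals/albert-base-vitaminc")
model = AutoModelForSequenceClassification.from_pretrained("tals/albert-base-vitaminc")

claim = "Over 100,000 Wikipedia revisions were collected for VitaminC."
evidence = "We collect over 100,000 Wikipedia revisions that modify an underlying fact."

# Encode the claim/evidence pair and pick the highest-scoring class
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```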
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
yoshitomo-matsubara/bert-base-uncased-qnli | 1d297c67f59ada25497fe8b1ce802101ed8e0bdd | 2021-05-29T21:49:44.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qnli",
"transformers",
"qnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-qnli | 99 | null | transformers | 4,624 | ---
language: en
tags:
- bert
- qnli
- glue
- torchdistill
license: apache-2.0
datasets:
- qnli
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on the QNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qnli/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
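A minimal inference sketch for QNLI-style question–sentence pairs (the label semantics follow the standard GLUE QNLI setup and are an assumption here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yoshitomo-matsubara/bert-base-uncased-qnli")
model = AutoModelForSequenceClassification.from_pretrained("yoshitomo-matsubara/bert-base-uncased-qnli")

question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."

inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs, model.config.id2label)
```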
|
josu/albert-pt-br | ba4eb261c8d8f565b33192ff1ab69ee34f71b807 | 2022-03-09T03:41:04.000Z | [
"pytorch",
"albert",
"fill-mask",
"pt",
"transformers",
"portuguese",
"brazil",
"pt_BR",
"autotrain_compatible"
] | fill-mask | false | josu | null | josu/albert-pt-br | 99 | null | transformers | 4,625 | ---
language: pt
tags:
- portuguese
- brazil
- pt_BR
widget:
- text: Marte está no [MASK] solar.
---
``` python
from transformers import pipeline, AlbertTokenizer, AlbertForMaskedLM
model = AlbertForMaskedLM.from_pretrained('josu/albert-pt-br')
tokenizer = AlbertTokenizer.from_pretrained('josu/albert-pt-br')
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer, device=0)
text = 'Marte está no [MASK] solar.'
unmasker(text)
[{'score': 0.7004144191741943,
'token': 244,
'token_str': 'sistema',
'sequence': 'marte esta no sistema solar.'},
{'score': 0.02539917267858982,
'token': 4077,
'token_str': 'solar',
'sequence': 'marte esta no solar solar.'},
{'score': 0.020301498472690582,
'token': 49,
'token_str': 'seu',
'sequence': 'marte esta no seu solar.'},
{'score': 0.01753508299589157,
'token': 482,
'token_str': 'centro',
'sequence': 'marte esta no centro solar.'},
{'score': 0.013344300910830498,
'token': 1401,
'token_str': 'plano',
'sequence': 'marte esta no plano solar.'}]
``` |
IsaacBot/t5-small-finetuned-qa-google-en-answer_v1 | 37b218baa4c1eb237192024fef0f977d172cc9b8 | 2022-06-27T15:42:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | IsaacBot | null | IsaacBot/t5-small-finetuned-qa-google-en-answer_v1 | 99 | null | transformers | 4,626 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-qa-google-en_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-qa-google-en_v1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1413
- Rouge1: 40.2873
- Rouge2: 30.6667
- Rougel: 40.1625
- Rougelsum: 40.2529
- Gen Len: 6.352
## Model description
Model finetuned to generate answers based on an input paragraph.
Example input:
> extract answer: \<hl\> The most recent major version of Python is Python 3, which we shall be using in this tutorial. \<hl\> However, Python 2, although not being updated with anything other than security updates, is still quite popular.
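A minimal generation sketch following the input format above (a rough illustration; the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "IsaacBot/t5-small-finetuned-qa-google-en-answer_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Wrap the sentence to extract an answer from in <hl> ... <hl>
text = ("extract answer: <hl> The most recent major version of Python is Python 3, "
        "which we shall be using in this tutorial. <hl> However, Python 2 is still quite popular.")

inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```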
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was trained on Google Natural Questions, available in this repo: https://huggingface.co/datasets/IsaacBot/Natural-Questions-Sentence-Highlight. The dataset supports training for question and/or answer generation. This model was trained to generate an answer based on an input paragraph (no question hint).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.27 | 100 | 2.3595 | 33.588 | 25.61 | 33.5609 | 33.6388 | 6.141 |
| No log | 0.53 | 200 | 2.2774 | 34.5528 | 26.0686 | 34.5097 | 34.58 | 6.3135 |
| No log | 0.8 | 300 | 2.2435 | 36.1641 | 27.857 | 36.134 | 36.1435 | 6.5255 |
| No log | 1.06 | 400 | 2.2117 | 36.7651 | 28.2274 | 36.7426 | 36.7278 | 6.3635 |
| 2.3999 | 1.33 | 500 | 2.1987 | 38.0006 | 29.1016 | 37.9354 | 37.9785 | 6.485 |
| 2.3999 | 1.6 | 600 | 2.1846 | 37.978 | 28.9293 | 37.8811 | 37.9299 | 6.321 |
| 2.3999 | 1.86 | 700 | 2.1782 | 38.6531 | 29.582 | 38.5923 | 38.5797 | 6.484 |
| 2.3999 | 2.13 | 800 | 2.1726 | 38.8536 | 29.9533 | 38.7785 | 38.8452 | 6.537 |
| 2.3999 | 2.39 | 900 | 2.1640 | 38.7099 | 29.9414 | 38.6318 | 38.6736 | 6.4635 |
| 2.2365 | 2.66 | 1000 | 2.1563 | 39.2126 | 30.0471 | 39.1838 | 39.1994 | 6.4135 |
| 2.2365 | 2.93 | 1100 | 2.1579 | 39.6397 | 30.5926 | 39.5747 | 39.6701 | 6.4395 |
| 2.2365 | 3.19 | 1200 | 2.1525 | 39.3201 | 30.1897 | 39.2362 | 39.3391 | 6.4305 |
| 2.2365 | 3.46 | 1300 | 2.1514 | 39.4479 | 30.2987 | 39.3595 | 39.374 | 6.311 |
| 2.2365 | 3.72 | 1400 | 2.1478 | 39.7449 | 30.6635 | 39.6403 | 39.6993 | 6.372 |
| 2.1979 | 3.99 | 1500 | 2.1453 | 39.7789 | 30.4461 | 39.6917 | 39.8043 | 6.4215 |
| 2.1979 | 4.26 | 1600 | 2.1427 | 39.8127 | 30.3727 | 39.6894 | 39.7522 | 6.3165 |
| 2.1979 | 4.52 | 1700 | 2.1426 | 40.3472 | 30.7014 | 40.2174 | 40.3229 | 6.452 |
| 2.1979 | 4.79 | 1800 | 2.1413 | 40.2873 | 30.6667 | 40.1625 | 40.2529 | 6.352 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
flyswot/test | 0f65eb6e89b2638c1f072d6f9a98f573d9b7627b | 2022-06-15T17:22:55.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | flyswot | null | flyswot/test | 99 | null | transformers | 4,627 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- f1
model-index:
- name: test
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: F1
type: f1
value: 0.12404601272248332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2724
- F1: 0.1240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.001
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.0 | 1 | 2.2724 | 0.1240 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Lamia/DialoGPT-small-Sundrop | 416ed82b4214213967365835640e50b68ee6e138 | 2022-07-11T17:18:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Lamia | null | Lamia/DialoGPT-small-Sundrop | 99 | null | transformers | 4,628 | ---
tags:
- conversational
---
# Sundrop DialoGPT Model |
nakamura196/roberta-small-hi-char-mlm | cb64e954b4991d9aa093cdcf46d5ece78b1303e6 | 2022-07-22T00:10:42.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | nakamura196 | null | nakamura196/roberta-small-hi-char-mlm | 99 | 1 | transformers | 4,629 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "入[MASK]外無之候江戸大水又ハ大地震なと"
- text: "日[MASK]守御望之由可令披露候"
---
# roberta-small-hi-char-mlm
## Model Description
This is a RoBERTa model pre-trained on HI texts with a character tokenizer.
It uses the `is_decoder=False` option.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("nakamura196/roberta-small-hi-char-mlm")
model=AutoModelForMaskedLM.from_pretrained("nakamura196/roberta-small-hi-char-mlm")
```
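A short fill-mask sketch using one of the widget examples above (a minimal illustration):
```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nakamura196/roberta-small-hi-char-mlm")
print(fill_mask("入[MASK]外無之候江戸大水又ハ大地震なと"))
```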
|
bigscience/distill-bloom-1b3-10x | 4a8f3997eda7b62f6ab43808954b98861f1b83a9 | 2022-07-18T08:58:46.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"transformers",
"license:bigscience-bloom-rail-1.0",
"text-generation"
] | text-generation | false | bigscience | null | bigscience/distill-bloom-1b3-10x | 99 | null | transformers | 4,630 | ---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
# <span style="color:red"><b>WARNING:</b> This is an <b>intermediary checkpoint</b> and WIP project. It is not fully trained yet. You might want to use [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) if you want a model that has completed training. This model is a distilled version of [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) (10x distillation) </span>
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 18.Jul.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 138 million parameters:
* 12 layers, 4 attention heads
* Hidden layers are 512-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node, using NVLink 4 inter-GPU connects and 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
_In progress._
Current training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/)
- Checkpoint size:
- Bf16 weights: 329GB
- Full checkpoint with optimizer states: 2.3TB
- Training throughput: About 150 TFLOP per GPU per second
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Estimated end: 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation (see the minimal sketch after this list)
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
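A rough text-generation sketch for this checkpoint (loading it with the standard Transformers causal-LM classes and the sampling settings below are assumptions; remember this is an intermediary checkpoint, as noted above):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigscience/distill-bloom-1b3-10x"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("The BigScience workshop was created to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```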
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model may output content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay
|
DeepChem/ChemBERTa-10M-MLM | a5cfe173103cff3149e7322130342a4880010cba | 2022-01-20T18:01:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DeepChem | null | DeepChem/ChemBERTa-10M-MLM | 98 | null | transformers | 4,631 | Entry not found |
Helsinki-NLP/opus-mt-es-ru | 332aa8549e185c507579275be71156665765de5e | 2021-09-09T21:44:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ru | 98 | null | transformers | 4,632 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ru
* source languages: es
* target languages: ru
* OPUS readme: [es-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.es.ru | 20.9 | 0.489 |
| newstest2013.es.ru | 23.4 | 0.504 |
| Tatoeba.es.ru | 47.0 | 0.657 |
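## Example usage

A minimal translation sketch with the MarianMT classes in Transformers (the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["El clima está cambiando rápidamente."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```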
|
Helsinki-NLP/opus-mt-ru-ar | ab04b35320965fbf22f7d9a9f40c9d96677f978a | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-ar | 98 | null | transformers | 4,633 | ---
language:
- ru
- ar
tags:
- translation
license: apache-2.0
---
### rus-ara
* source group: Russian
* target group: Arabic
* OPUS readme: [rus-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md)
* model: transformer
* source language(s): rus
* target language(s): apc ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the example after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.eval.txt)
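A minimal sketch showing the required target-language token (here `>>ara<<` for standard Arabic; whether this is the right token for your target variant is an assumption to check against the supported IDs above):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token to the Russian source sentence
src = ">>ara<< Я хочу изучать арабский язык."
batch = tokenizer([src], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```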
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.ara | 16.6 | 0.486 |
### System Info:
- hf_name: rus-ara
- source_languages: rus
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'ar']
- src_constituents: {'rus'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt
- src_alpha3: rus
- tgt_alpha3: ara
- short_pair: ru-ar
- chrF2_score: 0.486
- bleu: 16.6
- brevity_penalty: 0.9690000000000001
- ref_len: 18878.0
- src_name: Russian
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: ru
- tgt_alpha2: ar
- prefer_old: False
- long_pair: rus-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KoboldAI/GPT-Neo-125M-AID | e110966ae0510c56e863ce76f45526b0791b4394 | 2022-04-29T14:48:16.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | KoboldAI | null | KoboldAI/GPT-Neo-125M-AID | 98 | 1 | transformers | 4,634 | # GPT-Neo-125M-AID
This model was finetuned by Henk717 on Google Colab. It contains text adventure tuning and is the smallest 'Adventure' model of its kind.
Because of its limited size, its behavior is mostly suitable for testing text adventure game modes at fast speeds; for a coherent adventure you are better off using one of the 2.7B models.
ahmedrachid/FinancialBERT | 1ab60da3548fe8c9a5a5dd4af0d3ee490cfd3191 | 2022-02-07T15:00:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ahmedrachid | null | ahmedrachid/FinancialBERT | 98 | 3 | transformers | 4,635 | ---
language: en
widget:
- text: Tesla remains one of the highest [MASK] stocks on the market. Meanwhile, Aurora Innovation is a pre-revenue upstart that shows promise.
- text: Asian stocks [MASK] from a one-year low on Wednesday as U.S. share futures and oil recovered from the previous day's selloff, but uncertainty over the impact of the Omicron
- text: U.S. stocks were set to rise on Monday, led by [MASK] in Apple which neared $3 trillion in market capitalization, while investors braced for a Federal Reserve meeting later this week.
tags:
- fill-mask
---
**FinancialBERT** is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from it without needing the significant computational resources required to train the model.
The model was trained on a large corpus of financial texts:
- *TRC2-financial*: 1.8M news articles that were published by Reuters between 2008 and 2010.
- *Bloomberg News*: 400,000 articles between 2006 and 2013.
- *Corporate Reports*: 192,000 transcripts (10-K & 10-Q)
- *Earning Calls*: 42,156 documents.
More details on `FinancialBERT` can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
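A short fill-mask sketch using one of the widget examples above (a minimal illustration):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ahmedrachid/FinancialBERT")
print(fill_mask("Tesla remains one of the highest [MASK] stocks on the market."))
```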
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
allenai/ivila-block-layoutlm-finetuned-grotoap2 | e28b0964d21162676949e10aaaf6c50f0b861398 | 2021-09-27T23:32:43.000Z | [
"pytorch",
"layoutlm",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | allenai | null | allenai/ivila-block-layoutlm-finetuned-grotoap2 | 98 | null | transformers | 4,636 | Entry not found |
ceyda/wav2vec2-base-760-turkish | 59cd959c640f72ca15dcdda760ceb30cf1ee1304 | 2021-07-06T00:16:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ceyda | null | ceyda/wav2vec2-base-760-turkish | 98 | 2 | transformers | 4,637 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-Base Turkish by Ceyda Cinarel
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 22.60
---
# Wav2Vec2-Base-760-Turkish
# TBA
Based on the pretrained Turkish model [ceyda/wav2vec2-base-760](https://huggingface.co/ceyda/wav2vec2-base-760), fine-tuned on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-base-760-turkish")
model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-base-760-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-base-760-turkish")
model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-base-760-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
# Attention mask is not used because the base model was not trained with it. Reference: https://github.com/huggingface/transformers/blob/403d530eec105c0e229fc2b754afdf77a4439def/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L305
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids,skip_special_tokens=True)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Results**:
- WER: 22.602390
- CER: 6.054137
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/cceyda/wav2vec2) |
dbmdz/flair-distilbert-ner-germeval14 | fc9ea2e8b6f76aa33997be09df8ab5b6fe89a73a | 2021-03-02T18:32:30.000Z | [
"pytorch",
"de",
"dataset:germeval_14",
"flair",
"token-classification",
"sequence-tagger-model",
"license:mit"
] | token-classification | false | dbmdz | null | dbmdz/flair-distilbert-ner-germeval14 | 98 | 1 | flair | 4,638 | ---
datasets:
- germeval_14
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
widget:
- text: "Hugging Face ist eine französische Firma mit Sitz in New York."
license: mit
---
# Flair NER model trained on GermEval14 dataset
This model was trained on the official [GermEval14](https://sites.google.com/site/germeval2014ner/data)
dataset using the [Flair](https://github.com/flairNLP/flair) framework.
It uses a fine-tuned German DistilBERT model from [here](https://huggingface.co/distilbert-base-german-cased).
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Run 4 | Run 5 | Avg.
| ------------- | ----- | ----- | --------- | ----- | ----- | ----
| Development | 87.05 | 86.52 | **87.34** | 86.85 | 86.46 | 86.84
| Test | 85.43 | 85.88 | 85.72 | 85.47 | 85.62 | 85.62
† denotes that this model is selected for upload.
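# Example usage

A minimal Flair inference sketch using the widget sentence above (a rough illustration, assuming the checkpoint loads directly by its Hub id):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face model hub
tagger = SequenceTagger.load("dbmdz/flair-distilbert-ner-germeval14")

sentence = Sentence("Hugging Face ist eine französische Firma mit Sitz in New York.")
tagger.predict(sentence)

# Print the detected named entities
for entity in sentence.get_spans("ner"):
    print(entity)
```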
# Flair Fine-Tuning
We used the following script to fine-tune the model on the GermEval14 dataset:
```python
from argparse import ArgumentParser
import torch, flair
# dataset, model and embedding imports
from flair.datasets import GERMEVAL_14
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
if __name__ == "__main__":
# All arguments that can be passed
parser = ArgumentParser()
parser.add_argument("-s", "--seeds", nargs='+', type=int, default='42') # pass list of seeds for experiments
parser.add_argument("-c", "--cuda", type=int, default=0, help="CUDA device") # which cuda device to use
parser.add_argument("-m", "--model", type=str, help="Model name (such as Hugging Face model hub name")
# Parse experimental arguments
args = parser.parse_args()
# use cuda device as passed
flair.device = f'cuda:{str(args.cuda)}'
# for each passed seed, do one experimental run
for seed in args.seeds:
flair.set_seed(seed)
# model
hf_model = args.model
# initialize embeddings
embeddings = TransformerWordEmbeddings(
model=hf_model,
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=False,
respect_document_boundaries=False,
)
# select dataset depending on which language variable is passed
corpus = GERMEVAL_14()
# make the dictionary of tags to predict
tag_dictionary = corpus.make_tag_dictionary('ner')
# init bare-bones sequence tagger (no reprojection, LSTM or CRF)
tagger: SequenceTagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# init the model trainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# make string for output folder
output_folder = f"flert-ner-{hf_model}-{seed}"
# train with XLM parameters (AdamW, 20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train(
output_folder,
learning_rate=5.0e-5,
mini_batch_size=16,
mini_batch_chunk_size=1,
max_epochs=10,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
train_with_dev=False,
)
```
|
facebook/xglm-4.5B | 19523cf39b8f6f61232e9aa4191fa9473b398bff | 2022-02-15T01:32:08.000Z | [
"pytorch",
"xglm",
"text-generation",
"arxiv:2112.10668",
"transformers",
"license:mit"
] | text-generation | false | facebook | null | facebook/xglm-4.5B | 98 | 2 | transformers | 4,639 | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
inference: false
---
# XGLM-4.5B
XGLM-4.5B is a multilingual autoregressive language model (with 4.5 billion parameters) trained on a balanced corpus of a diverse set of 134 languages. It was introduced in the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin\*, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li\* (\*Equal Contribution). The original implementation was released in [this repository](https://github.com/pytorch/fairseq/tree/main/examples/xglm).
## Model card
For intended usage of the model, please refer to the [model card](https://github.com/pytorch/fairseq/blob/main/examples/xglm/model_card.md) released by the XGLM-4.5B development team.
## Example (COPA)
The following snippet shows how to evaluate our models (GPT-3 style, zero-shot) on the Choice of Plausible Alternatives (COPA) task, using examples in English, Chinese and Hindi.
```python
import torch
import torch.nn.functional as F
from transformers import XGLMTokenizer, XGLMForCausalLM
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-4.5B")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-4.5B")
data_samples = {
'en': [
{
"premise": "I wanted to conserve energy.",
"choice1": "I swept the floor in the unoccupied room.",
"choice2": "I shut off the light in the unoccupied room.",
"question": "effect",
"label": "1"
},
{
"premise": "The flame on the candle went out.",
"choice1": "I blew on the wick.",
"choice2": "I put a match to the wick.",
"question": "cause",
"label": "0"
}
],
'zh': [
{
"premise": "我想节约能源。",
"choice1": "我在空着的房间里扫了地板。",
"choice2": "我把空房间里的灯关了。",
"question": "effect",
"label": "1"
},
{
"premise": "蜡烛上的火焰熄灭了。",
"choice1": "我吹灭了灯芯。",
"choice2": "我把一根火柴放在灯芯上。",
"question": "cause",
"label": "0"
}
],
'hi': [
{
"premise": "M te vle konsève enèji.",
"choice1": "Mwen te fin baleye chanm lib la.",
"choice2": "Mwen te femen limyè nan chanm lib la.",
"question": "effect",
"label": "1"
},
{
"premise": "Flam bouji a te etenn.",
"choice1": "Mwen te soufle bouji a.",
"choice2": "Mwen te limen mèch bouji a.",
"question": "cause",
"label": "0"
}
]
}
def get_logprobs(prompt):
inputs = tokenizer(prompt, return_tensors="pt")
input_ids, output_ids = inputs["input_ids"], inputs["input_ids"][:, 1:]
outputs = model(**inputs, labels=input_ids)
logits = outputs.logits
logprobs = torch.gather(F.log_softmax(logits, dim=2), 2, output_ids.unsqueeze(2))
return logprobs
# Zero-shot evaluation for the Choice of Plausible Alternatives (COPA) task.
# A return value of 0 indicates that the first alternative is more plausible,
# while 1 indicates that the second alternative is more plausible.
def COPA_eval(prompt, alternative1, alternative2):
lprob1 = get_logprobs(prompt + "\n" + alternative1).sum()
lprob2 = get_logprobs(prompt + "\n" + alternative2).sum()
return 0 if lprob1 > lprob2 else 1
for lang in data_samples:
    for idx, example in enumerate(data_samples[lang]):
predict = COPA_eval(example["premise"], example["choice1"], example["choice2"])
print(f'{lang}-{idx}', predict, example['label'])
# en-0 1 1
# en-1 0 0
# zh-0 1 1
# zh-1 0 0
# hi-0 1 1
# hi-1 0 0
``` |
nielsr/beit-large-patch16-224-pt22k-ft22k | 6ae4dfaec1a310c4d0d69b1accf747854b1f9632 | 2021-08-03T15:49:41.000Z | [
"pytorch",
"beit",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | nielsr | null | nielsr/beit-large-patch16-224-pt22k-ft22k | 98 | null | transformers | 4,640 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
- imagenet-21k
---
# BEiT (large-sized model, fine-tuned on ImageNet-22k)
BEiT (BERT pre-training of Image Transformers) model pre-trained in a self-supervised way on ImageNet-22k (14 million images, 21,841 classes) at resolution 224x224, and also fine-tuned on the same dataset at the same resolution. It was introduced in the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model, so this model card has been written by the Hugging Face team.
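A minimal image-classification sketch for this checkpoint (a rough illustration, assuming the standard BEiT interface in Transformers; the image URL is just an example):
```python
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained("nielsr/beit-large-patch16-224-pt22k-ft22k")
model = BeitForImageClassification.from_pretrained("nielsr/beit-large-patch16-224-pt22k-ft22k")

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
# Predict one of the 21,841 ImageNet-22k classes
print(model.config.id2label[logits.argmax(-1).item()])
```
|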
snunlp/KR-ELECTRA-generator | 9b3389a4aabcb1abb4fa72b60ad5239e5e68bed3 | 2022-05-04T06:24:04.000Z | [
"pytorch",
"electra",
"fill-mask",
"ko",
"transformers",
"autotrain_compatible"
] | fill-mask | false | snunlp | null | snunlp/KR-ELECTRA-generator | 98 | null | transformers | 4,641 | ---
language:
- "ko"
---
## KoRean based ELECTRA (KR-ELECTRA)
This is a release of a Korean-specific ELECTRA model developed by the Computational Linguistics Lab at Seoul National University, with performance comparable to or better than existing Korean models. Our model shows remarkable performance on tasks related to informal texts such as review documents, while still showing comparable results on other kinds of tasks.
### Released Model
We pre-trained our KR-ELECTRA model following a base-scale model of [ELECTRA](https://github.com/google-research/electra). We trained the model based on Tensorflow-v1 using a v3-8 TPU of Google Cloud Platform.
#### Model Details
We followed the training parameters of the base-scale model of [ELECTRA](https://github.com/google-research/electra).
##### Hyperparameters
| model | # of layers | embedding size | hidden size | # of heads |
| ------: | ----------: | -------------: | ----------: | ---------: |
| Discriminator | 12 | 768 | 768 | 12 |
| Generator | 12 | 768 | 256 | 4 |
##### Pretraining
| batch size | train steps | learning rates | max sequence length | generator size |
| ---------: | ----------: | -------------: | ------------------: | -------------: |
| 256 | 700000 | 2e-4 | 128 | 0.33333 |
#### Training Dataset
34GB Korean texts including Wikipedia documents, news articles, legal texts, news comments, product reviews, and so on. These texts are balanced, consisting of the same ratios of written and spoken data.
#### Vocabulary
vocab size 30,000
We used morpheme-based unit tokens for our vocabulary based on the [Mecab-Ko](https://bitbucket.org/eunjeon/mecab-ko-dic/src/master/) morpheme analyzer.
#### Download Link
* Tensorflow-v1 model ([download](https://drive.google.com/file/d/1L_yKEDaXM_yDLwHm5QrXAncQZiMN3BBU/view?usp=sharing))
* PyTorch models on HuggingFace
```python
from transformers import ElectraModel, ElectraTokenizer
model = ElectraModel.from_pretrained("snunlp/KR-ELECTRA-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("snunlp/KR-ELECTRA-discriminator")
```
### Finetuning
We used and slightly edited the finetuning codes from [KoELECTRA](https://github.com/monologg/KoELECTRA), with additionally adjusted hyperparameters. You can download the codes and config files that we used for our model from our [github](https://github.com/snunlp/KR-ELECTRA).
#### Experimental Results
| | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) |
| :-------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: |
| KoBERT | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 |
| XLM-Roberta-Base | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 |
| HanBERT | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 |
| KoELECTRA-Base | 90.33 | 87.18 | 81.70 | 80.64 | 82.00 | 93.54 | 60.86 / 89.28 | 66.09 |
| KoELECTRA-Base-v2 | 89.56 | 87.16 | 80.70 | 80.72 | 82.30 | 94.85 | 84.01 / 92.40 | 67.45 |
| KoELECTRA-Base-v3 | 90.63 | **88.11** | **84.45** | 82.24 | **85.53** | 95.25 | 84.83 / **93.45** | 67.61 |
| **KR-ELECTRA (ours)** | **91.168** | 87.90 | 82.05 | **82.51** | 85.41 | **95.51** | **84.93** / 93.04 | **74.50** |
The baseline results are brought from [KoELECTRA](https://github.com/monologg/KoELECTRA)'s.
### Citation
```bibtex
@misc{kr-electra,
author = {Lee, Sangah and Hyopil Shin},
title = {KR-ELECTRA: a KoRean-based ELECTRA model},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/snunlp/KR-ELECTRA}}
}
```
|
speechbrain/sepformer-wham-enhancement | 2f9717e80979502f27ef8f542e54f0e84fa17a90 | 2022-06-30T23:14:06.000Z | [
"en",
"dataset:WHAM!",
"arxiv:2010.13154",
"arxiv:2106.04624",
"speechbrain",
"audio-to-audio",
"Speech Enhancement",
"WHAM!",
"SepFormer",
"Transformer",
"pytorch",
"license:apache-2.0"
] | audio-to-audio | false | speechbrain | null | speechbrain/sepformer-wham-enhancement | 98 | 1 | speechbrain | 4,642 | ---
language: "en"
thumbnail:
tags:
- audio-to-audio
- Speech Enhancement
- WHAM!
- SepFormer
- Transformer
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- WHAM!
metrics:
- SI-SNR
- PESQ
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on WHAM! for speech enhancement (8k sampling frequency)
This repository provides all the necessary tools to perform speech enhancement (denoising) with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain and pretrained on the [WHAM!](http://wham.whisper.ai/) dataset at an 8 kHz sampling frequency. WHAM! is essentially a version of the WSJ0-Mix dataset with environmental noise and reverberation at 8 kHz. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model achieves 14.35 dB SI-SNR on the test set of the WHAM! dataset.
| Release | Test-Set SI-SNR | Test-Set PESQ |
|:-------------:|:--------------:|:--------------:|
| 01-12-21 | 14.35 | 3.07 |
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Perform speech enhancement on your own audio file
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-wham-enhancement", savedir='pretrained_models/sepformer-wham-enhancement')
# for custom file, change path
est_sources = model.separate_file(path='speechbrain/sepformer-wham-enhancement/example_wham.wav')
torchaudio.save("enhanced_wham.wav", est_sources[:, :, 0].detach().cpu(), 8000)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
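For example (a minimal sketch reusing the snippet above):
```python
model = separator.from_hparams(
    source="speechbrain/sepformer-wham-enhancement",
    savedir='pretrained_models/sepformer-wham-enhancement',
    run_opts={"device": "cuda"},
)
est_sources = model.separate_file(path='speechbrain/sepformer-wham-enhancement/example_wham.wav')
```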
### Training
The training script is currently part of an ongoing pull request.
We will update the model card as soon as the PR is merged.
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1bbQvaiN-R79M697NnekA7Rr0jIYtO6e3).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ |
thinhda/chatbot | 4106752f7334a91863d710c11e43257145b48caf | 2021-09-19T07:07:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | thinhda | null | thinhda/chatbot | 98 | 1 | transformers | 4,643 | ---
tags:
- conversational
---
# Joey from Friends |
vuiseng9/bert-base-uncased-squad | 067490e92a98e7cbdd5761e842572fcd78189763 | 2022-01-08T18:08:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | vuiseng9 | null | vuiseng9/bert-base-uncased-squad | 98 | null | transformers | 4,644 | This model is developed with transformers v4.10.3.
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-base-uncased-squad
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_eval \
--do_train \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--doc_stride 128 \
--max_seq_length 384 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--eval_steps 250 \
--save_steps 2500 \
--logging_steps 1 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-uncased-squad
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-uncased-squad \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
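For quick inference outside these scripts, a minimal sketch with the question-answering pipeline (the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-base-uncased-squad")
print(qa(question="What is the capital of France?", context="Paris is the capital of France."))
```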
|
wukevin/tcr-bert-mlm-only | 8518235d6f14b462d78f15369e0bb65dc4449026 | 2021-11-22T08:32:41.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | wukevin | null | wukevin/tcr-bert-mlm-only | 98 | null | transformers | 4,645 | Pretrained on:
* Masked amino acid modeling
Please see our [main model](https://huggingface.co/wukevin/tcr-bert) for additional details. |
hf-internal-testing/test-opus-tatoeba-fi-en-v2 | 96a4d4666ebcb5b4f03173c1a80b253d7df5ec6f | 2022-03-10T17:25:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hf-internal-testing | null | hf-internal-testing/test-opus-tatoeba-fi-en-v2 | 98 | null | transformers | 4,646 | Entry not found |
IIC/mt5-spanish-mlsum | 6d40c985bdba270bac92242106e6f6c884e3ad2c | 2022-04-02T15:09:23.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"dataset:mlsum",
"transformers",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | IIC | null | IIC/mt5-spanish-mlsum | 98 | 2 | transformers | 4,647 | ---
language:
- es
tags:
- summarization
license: apache-2.0
datasets:
- mlsum
metrics:
- rouge1
- rouge2
- rougeL
- rougeLsum
model-index:
- name: xprophetnet-spanish-mlsum
results:
- task:
type: summarization
name: abstractive summarization
dataset:
type: mlsum
name: mlsum-es
args: es
metrics:
- type: rouge1
value: 21.9788
name: rouge1
- type: rouge2
value: 6.5249
name: rouge2
- type: rougeL
value: 17.7444
name: rougeL
- type: rougeLsum
value: 18.9783
name: rougeLsum
---
This is a model for text summarization in Spanish. It has been trained on the Spanish portion of [mlsum](https://huggingface.co/datasets/mlsum), finetuning the [mt5-base model](https://huggingface.co/google/mt5-base).
We used the following set of hyperparameters:
```python
{
"learning_rate": 2e-5,
"num_train_epochs": 8,
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 1,
"gradient_accumulation_steps": 256,
"fp16": False,
"weight_decay": 0.01,
}
```
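As a rough illustration (not from the original card), these values map directly onto `Seq2SeqTrainingArguments`; the `output_dir` below is hypothetical:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-spanish-mlsum",  # hypothetical output path
    learning_rate=2e-5,
    num_train_epochs=8,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=256,
    fp16=False,
    weight_decay=0.01,
)
```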
The model was fine-tuned to predict the concatenation of the title and the summary of each item in the dataset. The results shown below correspond to the test split of mlsum. The metrics for the **concatenation of titles and summaries** are:
```json
{'rouge1': 26.946, 'rouge2': 10.7271, 'rougeL': 21.4591, 'rougeLsum': 24.5001, 'gen_len': 18.9628}
```
On the other hand, the metrics for **just the summaries** are:
```json
{'rouge1': 21.9788, 'rouge2': 6.5249, 'rougeL': 17.7444, 'rougeLsum': 18.9783, 'gen_len': 18.9628}
```
This model is really easy to use, and with the following lines of code you can just start summarizing your documents in Spanish:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
text = "Hola esto es un ejemplo de texto a resumir. Poco hay que resumir aquí, pero es sólo de muestra."
model_str = "IIC/mt5-spanish-mlsum"
tokenizer = AutoTokenizer.from_pretrained(model_str)
model = AutoModelForSeq2SeqLM.from_pretrained(model_str)
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
castorini/monot5-3b-msmarco-10k | e12dd6847a8e26c0e9e85b204acfee20365455dd | 2022-03-28T15:17:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/monot5-3b-msmarco-10k | 98 | null | transformers | 4,648 | This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
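The card itself does not include a usage snippet. As a rough sketch (not the reference pygaggle implementation), a monoT5 reranker scores a query-passage pair with the "Query: ... Document: ... Relevant:" prompt and the probability of generating "true", as described in the paper; the query/passage pair below is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Note: this is a 3B-parameter model and needs substantial memory.
# Assumption: the checkpoint uses the standard T5 SentencePiece vocabulary.
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("castorini/monot5-3b-msmarco-10k")

query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun on the oceans."
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")

# Relevance score = probability of "true" vs. "false" as the first generated token
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(score)
```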
|
kamalkraj/bert-base-cased-ner-conll2003 | 9e08afd817857207cdc8a0e740a60ec471665952 | 2022-04-24T14:51:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | kamalkraj | null | kamalkraj/bert-base-cased-ner-conll2003 | 98 | null | transformers | 4,649 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-ner-conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9438052359513089
- name: Recall
type: recall
value: 0.9525412319084483
- name: F1
type: f1
value: 0.9481531116508919
- name: Accuracy
type: accuracy
value: 0.9910634321093416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-ner-conll2003
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0355
- Precision: 0.9438
- Recall: 0.9525
- F1: 0.9482
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
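A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kamalkraj/bert-base-cased-ner-conll2003",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```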
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DedsecurityAI/dpt-125mb | c0f6720dc015b3b0746816537bd51026ae323633 | 2022-05-28T04:30:09.000Z | [
"pytorch",
"opt",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | DedsecurityAI | null | DedsecurityAI/dpt-125mb | 98 | null | transformers | 4,650 | ---
license: mit
---
# How to use
```python
from transformers import pipeline
generator = pipeline('text-generation', model="DedsecurityAI/dpt-125mb")
generator("Hello Simon")
[{'generated_text': 'Hello Simon :) Welcome aboard aboard :) :) :) :) :) :) :) :) :) :) :) :) :) :)'}]
``` |
CenIA/albert-tiny-spanish | 6af62f921e5c06d542f24fc0353aa3a766395dbe | 2022-04-28T19:54:10.000Z | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | false | CenIA | null | CenIA/albert-tiny-spanish | 97 | 1 | transformers | 4,651 | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT Tiny Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.00125
- Batch Size: 2048
- Warmup ratio: 0.0125
- Warmup steps: 125000
- Goal steps: 10000000
- Total steps: 8300000
- Total training time (approx.): 58.2 days
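The checkpoint can be loaded with the standard `transformers` API. A minimal feature-extraction sketch (the example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("CenIA/albert-tiny-spanish")
model = AutoModel.from_pretrained("CenIA/albert-tiny-spanish")

inputs = tokenizer("Este es un ejemplo en español.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```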
## Training loss
 |
Helsinki-NLP/opus-mt-ar-it | e191529799738628f0f16cf657a456205bebde18 | 2021-01-18T07:47:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-it | 97 | null | transformers | 4,652 | ---
language:
- ar
- it
tags:
- translation
license: apache-2.0
---
### ara-ita
* source group: Arabic
* target group: Italian
* OPUS readme: [ara-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md)
* model: transformer
* source language(s): ara
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.ita | 44.2 | 0.658 |
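A minimal translation sketch with `transformers` (the Arabic example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ar-it")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-ar-it")

batch = tokenizer(["مرحبا، كيف حالك؟"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```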
### System Info:
- hf_name: ara-ita
- source_languages: ara
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'it']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: ita
- short_pair: ar-it
- chrF2_score: 0.6579999999999999
- bleu: 44.2
- brevity_penalty: 0.9890000000000001
- ref_len: 1495.0
- src_name: Arabic
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: it
- prefer_old: False
- long_pair: ara-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KETI-AIR/ke-t5-large-ko | f1d299906d0a2e9106316d89861c13918936fe82 | 2021-06-23T02:54:27.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KETI-AIR | null | KETI-AIR/ke-t5-large-ko | 97 | null | transformers | 4,653 | Entry not found |
LeoCordoba/mt5-small-cc-news-es-titles | 515533bdc69e8acaf6bf8b45057ae9f3f5c6ca6e | 2021-09-08T17:03:30.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"dataset:LeoCordoba/CC-NEWS-ES-titles",
"transformers",
"summarization",
"spanish",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | LeoCordoba | null | LeoCordoba/mt5-small-cc-news-es-titles | 97 | null | transformers | 4,654 | ---
language: es
tags:
- summarization
- mt5
- spanish
license: apache-2.0
datasets:
- LeoCordoba/CC-NEWS-ES-titles
model-index:
- name: mt5-small-ccnews-titles-es
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "CCNEWS-ES-titles"
type: LeoCordoba/CC-NEWS-ES-titles
metrics:
- name: Validation ROGUE-1
type: rogue-1
value: 22.6623
- name: Validation ROGUE-2
type: rogue-2
value: 7.7894
- name: Validation ROGUE-L
type: rogue-l
value: 19.8015
- name: Validation ROGUE-Lsum
type: rogue-lsum
value: 19.8092
- name: Test ROGUE-1
type: rogue-1
value: 22.9263
- name: Test ROGUE-2
type: rogue-2
value: 7.9146
- name: Test ROGUE-L
type: rogue-l
value: 20.0272
- name: Test ROGUE-Lsum
type: rogue-lsum
value: 20.0387
widget:
- text: "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno“, los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña."
---
## Hyperparameters
{
"max_target_length": 64,
"model_name_or_path": "google/mt5-small",
"num_train_epochs": 3,
"seed": 7,
"summary_column": "output_text",
"text_column": "text",
"encoder_max_length" : 512,
"decoder_max_length" :36,
"batch_size" : 128
}
## Usage
```
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
from transformers import pipeline
summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-ccnews-titles-es")
summarizer(article, min_length=5, max_length=64)
```
## Results
| metric | score |
| --- | ----- |
| eval_loss | 2.879085063934326 |
| eval_rouge1 | 22.6623 |
| eval_rouge2 | 7.7894 |
| eval_rougeL | 19.8015 |
| eval_rougeLsum | 19.8092 |
| eval_gen_len | 17.1839 |
| test_loss | 2.878429412841797 |
| test_rouge1 | 22.9263 |
| test_rouge2 | 7.9146 |
| test_rougeL | 20.0272 |
| test_rougeLsum | 20.0387 |
| test_gen_len | 17.1696 | |
Mary222/GPT2_RU_GAME | ed7c7df0e3032b7a4b184397a301bceba6da6927 | 2021-11-04T15:58:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers"
] | text-generation | false | Mary222 | null | Mary222/GPT2_RU_GAME | 97 | null | transformers | 4,655 | ---
language: ru
tags:
- text-generation
---
# GPT2 - RUS |
Qishuai/distilbert_punctuator_zh | 0bed7e402b9d9936697d34ac198c69fc8a41f163 | 2021-12-13T15:05:45.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Qishuai | null | Qishuai/distilbert_punctuator_zh | 97 | 2 | transformers | 4,656 | # Punctuator for Simplified Chinese
The model is fine-tuned with `DistilBertForTokenClassification` to add punctuation to plain text in simplified Chinese. It is based on a distilled version of `bert-base-chinese`.
## Usage
```python
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast
model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_zh")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_zh")
```
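An inference sketch (not part of the original card): it maps each token to its predicted punctuation label via `model.config.id2label`; the authors' own post-processing may differ:
```python
import torch

text = "现在天气很好我们去公园散步吧"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred])
```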
## Model Overview
### Training data
Combination of the following three datasets:
- News articles of People's Daily 2014. [Reference](https://github.com/InsaneLife/ChineseNLPCorpus)
### Model Performance
- Validation with MSRA training dataset. [Reference](https://github.com/InsaneLife/ChineseNLPCorpus/tree/master/NER/MSRA)
- Metrics Report:
| | precision | recall | f1-score | support |
|:----------------:|:---------:|:------:|:--------:|:-------:|
| C_COMMA | 0.67 | 0.59 | 0.63 | 91566 |
| C_DUNHAO | 0.50 | 0.37 | 0.42 | 21013 |
| C_EXLAMATIONMARK | 0.23 | 0.06 | 0.09 | 399 |
| C_PERIOD | 0.84 | 0.99 | 0.91 | 44258 |
| C_QUESTIONMARK | 0.00 | 1.00 | 0.00 | 0 |
| micro avg | 0.71 | 0.67 | 0.69 | 157236 |
| macro avg | 0.45 | 0.60 | 0.41 | 157236 |
| weighted avg | 0.69 | 0.67 | 0.68 | 157236 |
|
blanchefort/rubert-base-cased-sentiment-rurewiews | fa298cae2c62473004c347cabd9a44379d795383 | 2021-05-19T13:02:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"ru",
"dataset:RuReviews",
"transformers",
"sentiment"
] | text-classification | false | blanchefort | null | blanchefort/rubert-base-cased-sentiment-rurewiews | 97 | null | transformers | 4,657 | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuReviews
---
# RuBERT for Sentiment Analysis of Product Reviews
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuReviews](https://github.com/sismetanin/rureviews).
## Labels
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
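For example (the review text is illustrative; the label order follows the list above):
```python
labels = ['NEUTRAL', 'POSITIVE', 'NEGATIVE']
print([labels[i] for i in predict('Отличный товар, быстрая доставка!')])
```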
## Dataset used for model training
**[RuReviews](https://github.com/sismetanin/rureviews)**
> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.
|
flax-community/gpt-neo-125M-code-clippy-dedup | b4ba7c8b7a505b17b27e9740d128244b184ac07e | 2021-07-26T14:07:29.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt_neo",
"text-generation",
"arxiv:2107.03374",
"transformers"
] | text-generation | false | flax-community | null | flax-community/gpt-neo-125M-code-clippy-dedup | 97 | null | transformers | 4,658 | # GPT-Neo-125M-Code-Clippy-Dedup
> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
## Model Description
GPT-Neo-125M-Code-Clippy-Dedup is a [GPT-Neo-125M model](https://huggingface.co/EleutherAI/gpt-neo-125M) finetuned using causal language modeling on our deduplicated version of the Code Clippy Data dataset, which was scraped from public GitHub repositories (more information in the provided link). This model is specialized to autocomplete methods in multiple programming languages.
## Training data
[Code Clippy Data dataset](https://huggingface.co/datasets/code_search_net).
## Training procedure
To stabilize training, we limited the training data to files with extensions belonging to popular programming languages, since our dataset also contains other types of files, such as `.txt` or project configuration files. We used the following extensions to filter by:
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_streaming_filter_flax.py).
```bash
./run_clm_streaming_filter_flax.py \
--output_dir $HOME/gpt-neo-125M-code-clippy-dedup \
--model_name_or_path="EleutherAI/gpt-neo-125M" \
--dataset_name $HOME/gpt-code-clippy/data_processing/code_clippy_filter.py \
--data_dir $HOME/code_clippy_data/code_clippy_dedup_data \
--text_column_name="text" \
--do_train --do_eval \
--block_size="2048" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="16" \
--preprocessing_num_workers="8" \
--learning_rate="1e-4" \
--max_steps 100000 \
--warmup_steps 2000 \
--decay_steps 30000 \
--adam_beta1="0.9" \
--adam_beta2="0.95" \
--weight_decay="0.1" \
--overwrite_output_dir \
--logging_steps="25" \
--eval_steps="500" \
--push_to_hub="False" \
--report_to="all" \
--dtype="bfloat16" \
--skip_memory_metrics="True" \
--save_steps="500" \
--save_total_limit 10 \
--gradient_accumulation_steps 16 \
--report_to="wandb" \
--run_name="gpt-neo-125M-code-clippy-dedup-filtered-no-resize-2048bs" \
--max_eval_samples 2000 \
--save_optimizer true
```
## Intended Use and Limitations
The model is fine-tuned on text files from GitHub repositories (mostly programming languages, but also markdown and other project-related files).
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use a GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-code-clippy-dedup")

prompt = """def greet(name):
  '''A function to greet user. Given a user name it should say hello'''
"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
                     early_stopping=True, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][start:]))
```
### Limitations and Biases
The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discuss are highlighted here as it pertains to this dataset and models that may be trained from it. **As well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences, such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one, which are capable of generating high-quality code, have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper and shown in the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Security implications:** No filtering or checking for vulnerabilities or buggy code was performed on the dataset this model is trained on. This means that the dataset may contain code that is malicious or contains vulnerabilities. Therefore, this model may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that works improperly and could result in serious consequences depending on the software. Additionally, this model may be used to deliberately generate malicious code in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few, if any, previous cases of such usage of licensed, publicly available code. Therefore, any code generated with this model may be required to obey license terms that align with the software it was trained on, such as GPL-3.0. The legal ramifications of using a language model trained on this dataset are unclear.
5. **Biases:** The programming languages most represented in the dataset this model was trained on are JavaScript and Python. Therefore, other still-popular languages such as C and C++ are less represented, and the model's performance for these languages will be comparatively worse. Additionally, this dataset only contains public repositories, so the model may not generate code that is representative of code written by private developers. No filtering was performed for potentially racist, offensive, or otherwise inappropriate content. Therefore, this model may reflect such biases in its generation.
GPT-Neo-125M-Code-Clippy-Dedup is finetuned from GPT-Neo and might have inherited biases and limitations from it. See [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
## Eval results
Coming soon... |
laxya007/gpt2_till10 | d22959e62a589876dbeab822fe3a9c895788e0d4 | 2021-05-23T08:21:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_till10 | 97 | null | transformers | 4,659 | Entry not found |
manandey/wav2vec2-large-xlsr-punjabi | 31bea48ebd8a57713e0bd55bf5863bc5d30341c2 | 2022-03-25T16:54:20.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pa-IN",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | manandey | null | manandey/wav2vec2-large-xlsr-punjabi | 97 | null | transformers | 4,660 | ---
language: pa-IN
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Punjabi by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pa-IN
type: common_voice
args: pa-IN
metrics:
- name: Test WER
type: wer
value: 57.31
---
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-punjabi")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-punjabi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-punjabi")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-punjabi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.31%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
pucpr/clinicalnerpt-procedure | 2bd68cf3aa4f49fb4bd3e88771ac9a7f30ed44fa | 2021-10-13T09:32:04.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-procedure | 97 | 4 | transformers | 4,661 | ---
language: "pt"
widget:
- text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI."
- text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Procedure
The Procedure NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and the IOB2 format, starting from the BioBERTpt(all) model.
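A minimal usage sketch with the `transformers` NER pipeline, using one of the example sentences above:
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="pucpr/clinicalnerpt-procedure",
    aggregation_strategy="simple",
)
print(ner("FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."))
```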
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
sebastian-hofstaetter/idcm-distilbert-msmarco_doc | 4d1a03b94f38099cc927aa4ce4a1d3c40ea4e1b4 | 2021-05-26T14:14:50.000Z | [
"pytorch",
"IDCM",
"en",
"dataset:ms_marco",
"arxiv:2105.09816",
"transformers",
"document-retrieval",
"knowledge-distillation"
] | null | false | sebastian-hofstaetter | null | sebastian-hofstaetter/idcm-distilbert-msmarco_doc | 97 | 1 | transformers | 4,662 | ---
language: "en"
tags:
- document-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# Intra-Document Cascading (IDCM)
We provide a retrieval-trained IDCM model. Our model is trained on MSMARCO-Document with up to 2000 tokens.
This instance can be used to **re-rank a candidate set** of long documents. The base BERT architecture is a 6-layer DistilBERT.
If you want to know more about our intra document cascading model & training procedure using knowledge distillation check out our paper: https://arxiv.org/abs/2105.09816 🎉
For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/intra-document-cascade
## Configuration
- Trained with fp16 mixed precision
- We select the top 4 windows of size (50 + 2*7 overlap words) with our fast CK model and score them with BERT
- The published code here is only usable for inference (we removed the training code)
## Model Code
````python
from transformers import AutoTokenizer,AutoModel, PreTrainedModel,PretrainedConfig
from typing import Dict
import torch
from torch import nn as nn
class IDCM_InferenceOnly(PreTrainedModel):
'''
IDCM is a neural re-ranking model for long documents, it creates an intra-document cascade between a fast (CK) and a slow module (BERT_Cat)
This code is only usable for inference (we removed the training mechanism for simplicity)
'''
config_class = IDCM_Config
base_model_prefix = "bert_model"
def __init__(self,
cfg) -> None:
super().__init__(cfg)
#
# bert - scoring
#
if isinstance(cfg.bert_model, str):
self.bert_model = AutoModel.from_pretrained(cfg.bert_model)
else:
self.bert_model = cfg.bert_model
#
# final scoring (combination of bert scores)
#
self._classification_layer = torch.nn.Linear(self.bert_model.config.hidden_size, 1)
self.top_k_chunks = cfg.top_k_chunks
self.top_k_scoring = nn.Parameter(torch.full([1,self.top_k_chunks], 1, dtype=torch.float32, requires_grad=True))
#
# local self attention
#
self.padding_idx= cfg.padding_idx
self.chunk_size = cfg.chunk_size
self.overlap = cfg.overlap
self.extended_chunk_size = self.chunk_size + 2 * self.overlap
#
# sampling stuff
#
self.sample_n = cfg.sample_n
self.sample_context = cfg.sample_context
if self.sample_context == "ck":
i = 3
self.sample_cnn3 = nn.Sequential(
nn.ConstantPad1d((0,i - 1), 0),
nn.Conv1d(kernel_size=i, in_channels=self.bert_model.config.dim, out_channels=self.bert_model.config.dim),
nn.ReLU()
)
elif self.sample_context == "ck-small":
i = 3
self.sample_projector = nn.Linear(self.bert_model.config.dim,384)
self.sample_cnn3 = nn.Sequential(
nn.ConstantPad1d((0,i - 1), 0),
nn.Conv1d(kernel_size=i, in_channels=384, out_channels=128),
nn.ReLU()
)
self.sampling_binweights = nn.Linear(11, 1, bias=True)
torch.nn.init.uniform_(self.sampling_binweights.weight, -0.01, 0.01)
self.kernel_alpha_scaler = nn.Parameter(torch.full([1,1,11], 1, dtype=torch.float32, requires_grad=True))
self.register_buffer("mu",nn.Parameter(torch.tensor([1.0, 0.9, 0.7, 0.5, 0.3, 0.1, -0.1, -0.3, -0.5, -0.7, -0.9]), requires_grad=False).view(1, 1, 1, -1))
self.register_buffer("sigma", nn.Parameter(torch.tensor([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]), requires_grad=False).view(1, 1, 1, -1))
def forward(self,
query: Dict[str, torch.LongTensor],
document: Dict[str, torch.LongTensor],
use_fp16:bool = True,
output_secondary_output: bool = False):
#
# patch up documents - local self attention
#
document_ids = document["input_ids"][:,1:]
if document_ids.shape[1] > self.overlap:
needed_padding = self.extended_chunk_size - (((document_ids.shape[1]) % self.chunk_size) - self.overlap)
else:
needed_padding = self.extended_chunk_size - self.overlap - document_ids.shape[1]
orig_doc_len = document_ids.shape[1]
document_ids = nn.functional.pad(document_ids,(self.overlap, needed_padding),value=self.padding_idx)
chunked_ids = document_ids.unfold(1,self.extended_chunk_size,self.chunk_size)
batch_size = chunked_ids.shape[0]
chunk_pieces = chunked_ids.shape[1]
chunked_ids_unrolled=chunked_ids.reshape(-1,self.extended_chunk_size)
packed_indices = (chunked_ids_unrolled[:,self.overlap:-self.overlap] != self.padding_idx).any(-1)
orig_packed_indices = packed_indices.clone()
ids_packed = chunked_ids_unrolled[packed_indices]
mask_packed = (ids_packed != self.padding_idx)
total_chunks=chunked_ids_unrolled.shape[0]
packed_query_ids = query["input_ids"].unsqueeze(1).expand(-1,chunk_pieces,-1).reshape(-1,query["input_ids"].shape[1])[packed_indices]
packed_query_mask = query["attention_mask"].unsqueeze(1).expand(-1,chunk_pieces,-1).reshape(-1,query["attention_mask"].shape[1])[packed_indices]
#
# sampling
#
if self.sample_n > -1:
#
# ck learned matches
#
if self.sample_context == "ck-small":
query_ctx = torch.nn.functional.normalize(self.sample_cnn3(self.sample_projector(self.bert_model.embeddings(packed_query_ids).detach()).transpose(1,2)).transpose(1, 2),p=2,dim=-1)
document_ctx = torch.nn.functional.normalize(self.sample_cnn3(self.sample_projector(self.bert_model.embeddings(ids_packed).detach()).transpose(1,2)).transpose(1, 2),p=2,dim=-1)
elif self.sample_context == "ck":
query_ctx = torch.nn.functional.normalize(self.sample_cnn3((self.bert_model.embeddings(packed_query_ids).detach()).transpose(1,2)).transpose(1, 2),p=2,dim=-1)
document_ctx = torch.nn.functional.normalize(self.sample_cnn3((self.bert_model.embeddings(ids_packed).detach()).transpose(1,2)).transpose(1, 2),p=2,dim=-1)
else:
qe = self.tk_projector(self.bert_model.embeddings(packed_query_ids).detach())
de = self.tk_projector(self.bert_model.embeddings(ids_packed).detach())
query_ctx = self.tk_contextualizer(qe.transpose(1,0),src_key_padding_mask=~packed_query_mask.bool()).transpose(1,0)
document_ctx = self.tk_contextualizer(de.transpose(1,0),src_key_padding_mask=~mask_packed.bool()).transpose(1,0)
query_ctx = torch.nn.functional.normalize(query_ctx,p=2,dim=-1)
document_ctx= torch.nn.functional.normalize(document_ctx,p=2,dim=-1)
cosine_matrix = torch.bmm(query_ctx,document_ctx.transpose(-1, -2)).unsqueeze(-1)
kernel_activations = torch.exp(- torch.pow(cosine_matrix - self.mu, 2) / (2 * torch.pow(self.sigma, 2))) * mask_packed.unsqueeze(-1).unsqueeze(1)
kernel_res = torch.log(torch.clamp(torch.sum(kernel_activations, 2) * self.kernel_alpha_scaler, min=1e-4)) * packed_query_mask.unsqueeze(-1)
packed_patch_scores = self.sampling_binweights(torch.sum(kernel_res, 1))
sampling_scores_per_doc = torch.zeros((total_chunks,1), dtype=packed_patch_scores.dtype, layout=packed_patch_scores.layout, device=packed_patch_scores.device)
sampling_scores_per_doc[packed_indices] = packed_patch_scores
sampling_scores_per_doc = sampling_scores_per_doc.reshape(batch_size,-1,)
sampling_scores_per_doc_orig = sampling_scores_per_doc.clone()
sampling_scores_per_doc[sampling_scores_per_doc == 0] = -9000
sampling_sorted = sampling_scores_per_doc.sort(descending=True)
sampled_indices = sampling_sorted.indices + torch.arange(0,sampling_scores_per_doc.shape[0]*sampling_scores_per_doc.shape[1],sampling_scores_per_doc.shape[1],device=sampling_scores_per_doc.device).unsqueeze(-1)
sampled_indices = sampled_indices[:,:self.sample_n]
sampled_indices_mask = torch.zeros_like(packed_indices).scatter(0, sampled_indices.reshape(-1), 1)
# pack indices
packed_indices = sampled_indices_mask * packed_indices
packed_query_ids = query["input_ids"].unsqueeze(1).expand(-1,chunk_pieces,-1).reshape(-1,query["input_ids"].shape[1])[packed_indices]
packed_query_mask = query["attention_mask"].unsqueeze(1).expand(-1,chunk_pieces,-1).reshape(-1,query["attention_mask"].shape[1])[packed_indices]
ids_packed = chunked_ids_unrolled[packed_indices]
mask_packed = (ids_packed != self.padding_idx)
#
# expensive bert scores
#
bert_vecs = self.forward_representation(torch.cat([packed_query_ids,ids_packed],dim=1),torch.cat([packed_query_mask,mask_packed],dim=1))
packed_patch_scores = self._classification_layer(bert_vecs)
scores_per_doc = torch.zeros((total_chunks,1), dtype=packed_patch_scores.dtype, layout=packed_patch_scores.layout, device=packed_patch_scores.device)
scores_per_doc[packed_indices] = packed_patch_scores
scores_per_doc = scores_per_doc.reshape(batch_size,-1,)
scores_per_doc_orig = scores_per_doc.clone()
scores_per_doc_orig_sorter = scores_per_doc.clone()
if self.sample_n > -1:
scores_per_doc = scores_per_doc * sampled_indices_mask.view(batch_size,-1)
#
# aggregate bert scores
#
if scores_per_doc.shape[1] < self.top_k_chunks:
scores_per_doc = nn.functional.pad(scores_per_doc,(0, self.top_k_chunks - scores_per_doc.shape[1]))
scores_per_doc[scores_per_doc == 0] = -9000
scores_per_doc_orig_sorter[scores_per_doc_orig_sorter == 0] = -9000
score = torch.sort(scores_per_doc,descending=True,dim=-1).values
score[score <= -8900] = 0
score = (score[:,:self.top_k_chunks] * self.top_k_scoring).sum(dim=1)
if self.sample_n == -1:
if output_secondary_output:
return score,{
"packed_indices": orig_packed_indices.view(batch_size,-1),
"bert_scores":scores_per_doc_orig
}
else:
return score,scores_per_doc_orig
else:
if output_secondary_output:
return score,scores_per_doc_orig,{
"score": score,
"packed_indices": orig_packed_indices.view(batch_size,-1),
"sampling_scores":sampling_scores_per_doc_orig,
"bert_scores":scores_per_doc_orig
}
return score
def forward_representation(self, ids,mask,type_ids=None) -> Dict[str, torch.Tensor]:
if self.bert_model.base_model_prefix == 'distilbert': # diff input / output
pooled = self.bert_model(input_ids=ids,
attention_mask=mask)[0][:,0,:]
elif self.bert_model.base_model_prefix == 'longformer':
_, pooled = self.bert_model(input_ids=ids,
attention_mask=mask.long(),
global_attention_mask = ((1-ids)*mask).long())
elif self.bert_model.base_model_prefix == 'roberta': # no token type ids
_, pooled = self.bert_model(input_ids=ids,
attention_mask=mask)
else:
_, pooled = self.bert_model(input_ids=ids,
token_type_ids=type_ids,
attention_mask=mask)
return pooled
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") # honestly not sure if that is the best way to go, but it works :)
model = IDCM_InferenceOnly.from_pretrained("sebastian-hofstaetter/idcm-distilbert-msmarco_doc")
````
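A scoring sketch inferred from the `forward()` signature above (the query/document pair is illustrative and not part of the original card):
```python
query = tokenizer("why is the sky blue", return_tensors="pt")
document = tokenizer(
    "The sky appears blue because air molecules scatter short (blue) wavelengths "
    "of sunlight more strongly than long (red) wavelengths.",
    return_tensors="pt", truncation=True, max_length=2000,
)

score = model(query, document)
print(score)
```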
## Effectiveness on MSMARCO Passage & TREC Deep Learning '19
We trained our model on the MSMARCO-Document collection. We trained the selection module CK with knowledge distillation from the stronger BERT model.
For re-ranking we used the top-100 BM25 results. The throughput of IDCM should be roughly 600 documents (of up to 2000 tokens each) per second.
### MSMARCO-Document-DEV
| | MRR@10 | NDCG@10 |
|----------------------------------|--------|---------|
| BM25 | .252 | .311 |
| **IDCM** | .380 | .446 |
### TREC-DL'19 (Document Task)
For MRR we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers.
| | MRR@10 | NDCG@10 |
|----------------------------------|--------|---------|
| BM25 | .661 | .488 |
| **IDCM** | .916 | .688 |
For more metrics, baselines, info and analysis, please see the paper: https://arxiv.org/abs/2105.09816
## Limitations & Bias
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on longer documents of MSMARCO, so it might struggle with especially short document text - for short text we recommend one of our MSMARCO-Passage trained models.
## Citation
If you use our model checkpoint please cite our work as:
```
@inproceedings{Hofstaetter2021_idcm,
author = {Sebastian Hofst{\"a}tter and Bhaskar Mitra and Hamed Zamani and Nick Craswell and Allan Hanbury},
title = {{Intra-Document Cascading: Learning to Select Passages for Neural Document Ranking}},
booktitle = {Proc. of SIGIR},
year = {2021},
}
``` |
sentence-transformers/msmarco-bert-co-condensor | 153347cd1e647921de84615b73c2d50788dd72df | 2021-09-24T10:57:05.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2108.05540",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/msmarco-bert-co-condensor | 97 | null | sentence-transformers | 4,663 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/msmarco-bert-co-condensor
This is a port of the [Luyu/co-condenser-marco-retriever](https://huggingface.co/Luyu/co-condenser-marco-retriever) model to [sentence-transformers](https://www.SBERT.net): it maps sentences & paragraphs to a 768-dimensional dense vector space and is optimized for the task of semantic search.
It is based on the paper: [Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval](https://arxiv.org/abs/2108.05540)
## Evaluation
| Model | MS MARCO Dev (MRR@10) | TREC DL 2019 | TREC DL 2020 | FiQA (NDCG@10) | TREC COVID (NDCG@10) | TREC News (NDCG@10) | TREC Robust04 (NDCG@10) |
| ----- | :-------------------: | :----------: | :----------: | :------------: | :------------------: | :-----------------: | :--------------------: |
| [msmarco-roberta-base-ance-firstp](https://huggingface.co/sentence-transformers/msmarco-roberta-base-ance-firstp) | 33.01 | 67.84 | 66.04 | 29.5 | 67.12 | 38.2 | 39.2 |
| [msmarco-bert-co-condensor](https://huggingface.co/sentence-transformers/msmarco-bert-co-condensor) | 35.51 | 68.16 | 69.13 | 26.04 | 66.89 | 28.54 | 30.71 |
| [msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) | 34.43 | 71.04 | 69.78 | 30.02 | 65.39 | 37.70 | 42.70 |
| [msmarco-distilbert-dot-v5](https://huggingface.co/sentence-transformers/msmarco-distilbert-dot-v5) | 37.25 | 70.14 | 71.08 | 28.61 | 71.96 | 37.88 | 38.29 |
| [msmarco-bert-base-dot-v5](https://huggingface.co/sentence-transformers/msmarco-bert-base-dot-v5) | 38.08 | 70.51 | 73.45 | 32.29 | 74.81 | 38.81 | 42.67 |
For more details on the comparison, see: [SBERT.net - MSMARCO Models](https://www.sbert.net/docs/pretrained-models/msmarco-v5.html)
In the paper, Gao & Callan claim an MS MARCO-Dev score of 38.2 (MRR@10). This is achieved by changing the benchmark: the original MS MARCO dataset just provides queries and text passages, from which you must retrieve the relevant passages for a given query.
In their [code](https://github.com/luyug/Dense/blob/454af38e06fe79aac8243b0fa31387c07ee874ab/examples/msmarco-passage-ranking/get_data.sh#L10), they combine the passages with the document titles from the MS MARCO document task, i.e. they train and evaluate their model with additional information from a different benchmark. In the above table, the score of 35.41 (MRR@10) is on the MS MARCO Passages benchmark as originally proposed, without the document titles.
They further trained their model with the document titles, which creates an information leakage: the document titles were re-constructed by the MS MARCO organizers at a later stage for the MS MARCO document benchmark. It was not possible to reconstruct all document titles for all passages. However, the distribution of having a title is not equal for relevant and non-relevant passages: 71.9% of the relevant passages have a document title, while only 64.4% of the non-relevant passages have a title. Hence, the model can learn that, as soon as there is a document title, the probability is higher that this passage is annotated as relevant. It makes the decision not based on the passage content, but on the artifact of whether there is a title or not.
The information leakage and the change of the benchmark likely lead to the inflated scores reported in the paper.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-bert-co-condensor')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take output from first token
def cls_pooling(model_output):
return model_output.last_hidden_state[:,0]
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = cls_pooling(model_output)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-bert-co-condensor")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-bert-co-condensor")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-bert-co-condensor)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval](https://arxiv.org/abs/2108.05540) |
tdklab/hebert-finetuned-hebrew-squad | abf5f9b3a7509b8b104eed54d3f6441fb0ae2238 | 2022-04-07T09:36:33.000Z | [
"pytorch",
"bert",
"question-answering",
"Hebrew",
"dataset:tdklab/Hebrew_Squad_v1",
"transformers",
"generated_from_trainer",
"avichr/heBERT",
"he",
"model-index",
"autotrain_compatible"
] | question-answering | false | tdklab | null | tdklab/hebert-finetuned-hebrew-squad | 97 | 1 | transformers | 4,664 | ---
language: Hebrew
datasets:
- tdklab/Hebrew_Squad_v1
tags:
- generated_from_trainer
- avichr/heBERT
- he
model-index:
- name: hebert-finetuned-hebrew-squad
results: []
widget:
- text: "מתי הוקמה הכרמלית ?"
context: "כרמלית היא כלי תחבורה ציבורית תת-קרקעי, היחיד בישראל. הכרמלית מחברת בין שלושה אזורים מרכזיים בעיר חיפה: העיר התחתית, שכונת הדר ומרכז הכרמל. לכרמלית קו בודד ובו שש תחנות פעילות, היא מופעלת על ידי חברת הכרמלית חיפה בעמ. הקמתה של הכרמלית החלה במאי 1956 והסתיימה במרץ 1959. בניגוד לתפיסה הרווחת, לפיה הכרמלית היא רכבת תחתית, אין היא אלא פוניקולר, רכבל הנע על מסילה במקום להיות תלוי באוויר. שלא כמו רכבת, אין בקרונות הכרמלית מנוע, ומשקלם של הקרונות היורדים הוא הכוח העיקרי המניע את הקרונות העולים (מנוע בתחנת הקצה העליונה תורם אף הוא כוח הנעה)."
- text: "כמה תחנות יש בכרמלית?"
context: "כרמלית היא כלי תחבורה ציבורית תת-קרקעי, היחיד בישראל. הכרמלית מחברת בין שלושה אזורים מרכזיים בעיר חיפה: העיר התחתית, שכונת הדר ומרכז הכרמל. לכרמלית קו בודד ובו שש תחנות פעילות, היא מופעלת על ידי חברת הכרמלית חיפה בעמ. הקמתה של הכרמלית החלה במאי 1956 והסתיימה במרץ 1959. בניגוד לתפיסה הרווחת, לפיה הכרמלית היא רכבת תחתית, אין היא אלא פוניקולר, רכבל הנע על מסילה במקום להיות תלוי באוויר. שלא כמו רכבת, אין בקרונות הכרמלית מנוע, ומשקלם של הקרונות היורדים הוא הכוח העיקרי המניע את הקרונות העולים (מנוע בתחנת הקצה העליונה תורם אף הוא כוח הנעה)."
- text: "היכן נמצא בית המשפט העליון?"
context: "ירושלים היא עיר הבירה של מדינת ישראל , והעיר הגדולה ביותר בישראל בגודל האוכלוסייה. נכון לשנת 2021, מתגוררים בה כ-957 אלף תושבים. בירושלים שוכנים מוסדות הממשל של ישראל: הכנסת, בית המשפט העליון, משכן הנשיא, בית ראש הממשלה ורוב משרדי הממשלה. ירושלים שוכנת בהרי יהודה, על קו פרשת המים הארצי של ארץ ישראל, בין הים התיכון וים המלח, ברום של 570 עד 857 מטרים מעל פני הים."
- text: "מהן פירות הגפנים?"
context: "כרם הוא מטע שמגדלים בו עצי פרי מסוימים. בדרך כלל מתייחס המושג \"כרם\" למקום גידולן של גפנים, שפירותיהן, הענבים, משמשים למאכל ולייצור יין, אולם גם מטעי זיתים ושקד מצוי מכונים כרמים.הגפן היא שיח מטפס, ולכן בכרמי גפנים מוצבים קרדונים - עמודים שעליהם מדלים את הגפן, וחוטי שילוב (בכרמים מודרניים) התומכים בזמורות הצעירות, נושאות הפירות.בכרמים המגודלים בעל, נטועים כמאה עצים בדונם, ברווחים של כשלושה מטרים אחד מהשני, מודלים בדרך כלל על עמודים בגובה של כשני מטרים, כשעליהם חוטים בצורת סוכה, כך שבין העצים יש שטח פנוי המאפשר עיבוד של הקרקע - בעיקר קילטור החשוב מאוד לניצול יעיל יותר של המים.בכרמים המגודלים בהשקיה מקובל לנטוע 200–300 עצים לדונם, ובמקומות מסוימים באירופה אף 600 עצים. הגפנים נטועות בשורות, גובהן כמטר אחד, וחוטי שילוב עד גובה כ-180 ס\"מ, מחזיקים את הזמורות הצעירות."
- text: "כמה תושבים יש בירושלים?"
context: "ירושלים היא עיר הבירה של מדינת ישראל , והעיר הגדולה ביותר בישראל בגודל האוכלוסייה. נכון לשנת 2021, מתגוררים בה כ-957 אלף תושבים. בירושלים שוכנים מוסדות הממשל של ישראל: הכנסת, בית המשפט העליון, משכן הנשיא, בית ראש הממשלה ורוב משרדי הממשלה. ירושלים שוכנת בהרי יהודה, על קו פרשת המים הארצי של ארץ ישראל, בין הים התיכון וים המלח, ברום של 570 עד 857 מטרים מעל פני הים."
- text: "מה גרם לירידת מפלס המים?"
context: "הכנרת היא ימה בצפון מזרחה של ישראל. זהו אגם המים המתוקים הגדול בארץ ישראל. בעבר סיפקה הכנרת כרבע מצריכת המים בישראל, אך בעקבות ירידת מפלס המים כתוצאה משנות בצורת שפקדו את ישראל, פחתה שאיבת המים מהאגם ומתקני ההתפלה היו לספק המים העיקרי. כיום מספקת הכנרת בין 2 אחוזים מסך הצריכה ל־13 אחוזים. מפלס מי הכנרת משתנה תכופות על פי עונות השנה ובהתאם לשנים גשומות או שחונות ונמצא לרוב בתחום של 209 עד 212 מטרים מתחת לפני הים. בשנות בצורת נחשפים איים בכנרת עקב ירידת המפלס. הכנרת היא הימה המתוקה הנמוכה ביותר בעולם."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hebert-finetuned-hebrew-squad
This model is a fine-tuned version of the avichr/heBERT model on a SQuAD dataset auto-translated to Hebrew.
## Intended uses & limitations
Hebrew SQuAD
## Training and evaluation data
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| Hebrew_Squad_v1| train | 52,405 |
| Hebrew_Squad_v1| validation| 7,455 |
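To inspect the data, the dataset can presumably be loaded with the `datasets` library (a sketch; the split names follow the table above):
```python
from datasets import load_dataset

# Auto-translated Hebrew SQuAD used for fine-tuning.
hebrew_squad = load_dataset("tdklab/Hebrew_Squad_v1")

print(hebrew_squad)                # expected splits: train (~52k) and validation (~7.5k)
print(hebrew_squad["train"][0])    # a single {context, question, answers} example
```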
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
It took about 9.5 hours to finish training.
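For reference, these settings roughly correspond to the following `TrainingArguments` configuration (a sketch, not the authors' exact training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hebert-finetuned-hebrew-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas and epsilon below are the values listed above (also the defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```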
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
### Results
**Model size**: `415M`
| Metric | # Value |
| ------ | --------- |
| **Exact Match** | **42.6** |
| **F1** | **55.9** |
## Example Usage
```python
from transformers import pipeline
model_checkpoint = "tdklab/hebert-finetuned-hebrew-squad"
qa_pipeline = pipeline(
"question-answering",
model=model_checkpoint,
)
predictions = qa_pipeline({
'context': "ירושלים היא עיר הבירה של מדינת ישראל , והעיר הגדולה ביותר בישראל בגודל האוכלוסייה. נכון לשנת 2021, מתגוררים בה כ-957 אלף תושבים. בירושלים שוכנים מוסדות הממשל של ישראל: הכנסת, בית המשפט העליון, משכן הנשיא, בית ראש הממשלה ורוב משרדי הממשלה. ירושלים שוכנת בהרי יהודה, על קו פרשת המים הארצי של ארץ ישראל, בין הים התיכון וים המלח, ברום של 570 עד 857 מטרים מעל פני הים.",
'question': "מהי עיר הבירה של מדינת ישראל?"
})
print(predictions)
# output:
# {'score': 0.9999890327453613, 'start': 0, 'end': 7, 'answer': 'ירושלים'}
```
### About Us
Created by Matan Ben-chorin and May Flaster, guided by Dr. Oren Mishali.
This is our final project as part of our computer engineering B.Sc. studies in the Faculty of Electrical Engineering combined with Computer Science at the Technion, Israel Institute of Technology.
For further cooperation, please contact us by email:
Matan Ben-chorin: [email protected]
May Flaster: [email protected]
|
ccdv/lsg-distilroberta-base-4096 | 3224af86e9d209a1d9e41275b9e103ec423e8ea6 | 2022-07-25T05:36:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"long context",
"autotrain_compatible"
] | fill-mask | false | ccdv | null | ccdv/lsg-distilroberta-base-4096 | 97 | null | transformers | 4,665 | ---
language: en
tags:
- long context
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is a small version of the [distilroberta-base](https://huggingface.co/distilroberta-base) model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...). \
Encoder-decoder is supported, but I didn't test it extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-distilroberta-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilroberta-base-4096")
```
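Following the recommendation above, a sketch of tokenizing a long input with truncation and padding to a multiple of the block size (the 4096/128 values match the model's maximum length and default block size):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilroberta-base-4096")

text = "This is a long document. " * 500
inputs = tokenizer(
    text,
    return_tensors="pt",
    truncation=True,           # cap at the model maximum length (4096 here)
    max_length=4096,
    padding=True,              # required for pad_to_multiple_of to take effect
    pad_to_multiple_of=128,    # optional: pad up to a multiple of the block size
)
print(inputs["input_ids"].shape)
```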
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-distilroberta-base-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* sparsity_type="block_stride", use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-distilroberta-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilroberta-base-4096")
SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilroberta-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilroberta-base-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilroberta-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilroberta-base-4096")
for name, param in model.named_parameters():
if "global_embeddings" not in name:
param.requires_grad = False
else:
        param.requires_grad = True
```
|
Intel/xlnet-base-cased-mrpc | 686aebce620f3a8944a3faaafefe031aad4ebc6c | 2022-04-21T07:46:07.000Z | [
"pytorch",
"xlnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Intel | null | Intel/xlnet-base-cased-mrpc | 97 | null | transformers | 4,666 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: xlnet-base-cased-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8896672504378283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-mrpc
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7156
- Accuracy: 0.8456
- F1: 0.8897
- Combined Score: 0.8676
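A minimal usage sketch for paraphrase classification on sentence pairs (the label names below are the generic `LABEL_0`/`LABEL_1` placeholders — this assumes no custom id2label mapping is set in the checkpoint config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Intel/xlnet-base-cased-mrpc")

# MRPC is a sentence-pair task: pass the two sentences as text / text_pair.
result = classifier({
    "text": "The company said quarterly profit rose 10 percent.",
    "text_pair": "Quarterly profit at the company increased by 10 percent, it said.",
})
print(result)  # e.g. {'label': 'LABEL_1', 'score': ...}; in GLUE MRPC, label 1 usually means "equivalent"
```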
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
mrm8488/gpt-neo-1.3B-8bit | ae21d4aaa623fb0e8a23ae75bbcfafb6ec17b949 | 2022-06-01T14:51:41.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"license:wtfpl"
] | text-generation | false | mrm8488 | null | mrm8488/gpt-neo-1.3B-8bit | 97 | null | transformers | 4,667 | ---
license: wtfpl
---
|
knkarthick/TOPIC-DIALOGSUM | 744efa4bc5bb1653445d6d254274e2b2a199d8fe | 2022-07-07T06:19:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | knkarthick | null | knkarthick/TOPIC-DIALOGSUM | 97 | null | transformers | 4,668 | Entry not found |
Cameron/BERT-SBIC-targetcategory | dcb394be5d011eb3a67c06cea07ab7ef40daf264 | 2021-05-18T17:23:42.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Cameron | null | Cameron/BERT-SBIC-targetcategory | 96 | null | transformers | 4,669 | Entry not found |
Geotrend/bert-base-en-fr-cased | 30b1dd5115bb2441a8c098ff08aa67048e70c71d | 2021-05-18T19:15:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-cased | 96 | 1 | transformers | 4,670 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
---
# bert-base-en-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Helsinki-NLP/opus-mt-en-sla | 64585957c615474bfad967cc8f526ee7961f7769 | 2021-01-18T08:16:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sla | 96 | null | transformers | 4,671 | ---
language:
- en
- be
- hr
- mk
- cs
- ru
- pl
- bg
- uk
- sl
- sla
tags:
- translation
license: apache-2.0
---
### eng-sla
* source group: English
* target group: Slavic languages
* OPUS readme: [eng-sla](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sla/README.md)
* model: transformer
* source language(s): eng
* target language(s): bel bel_Latn bos_Latn bul bul_Latn ces csb_Latn dsb hrv hsb mkd orv_Cyrl pol rue rus slv srp_Cyrl srp_Latn ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch below
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.eval.txt)
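A minimal translation sketch showing the required target-language token (a hedged example; `>>rus<<` and `>>pol<<` are valid IDs from the target language list above):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-sla"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix selects the target language for this multilingual model.
src_texts = [">>rus<< This is a test sentence.", ">>pol<< How are you today?"]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```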
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engces.eng.ces | 20.1 | 0.484 |
| news-test2008-engces.eng.ces | 17.7 | 0.461 |
| newstest2009-engces.eng.ces | 19.1 | 0.479 |
| newstest2010-engces.eng.ces | 19.3 | 0.483 |
| newstest2011-engces.eng.ces | 20.4 | 0.486 |
| newstest2012-engces.eng.ces | 18.3 | 0.461 |
| newstest2012-engrus.eng.rus | 27.4 | 0.551 |
| newstest2013-engces.eng.ces | 21.5 | 0.489 |
| newstest2013-engrus.eng.rus | 20.9 | 0.490 |
| newstest2015-encs-engces.eng.ces | 21.1 | 0.496 |
| newstest2015-enru-engrus.eng.rus | 24.5 | 0.536 |
| newstest2016-encs-engces.eng.ces | 23.6 | 0.515 |
| newstest2016-enru-engrus.eng.rus | 23.0 | 0.519 |
| newstest2017-encs-engces.eng.ces | 19.2 | 0.474 |
| newstest2017-enru-engrus.eng.rus | 25.0 | 0.541 |
| newstest2018-encs-engces.eng.ces | 19.3 | 0.479 |
| newstest2018-enru-engrus.eng.rus | 22.3 | 0.526 |
| newstest2019-encs-engces.eng.ces | 20.4 | 0.486 |
| newstest2019-enru-engrus.eng.rus | 24.0 | 0.506 |
| Tatoeba-test.eng-bel.eng.bel | 22.9 | 0.489 |
| Tatoeba-test.eng-bul.eng.bul | 46.7 | 0.652 |
| Tatoeba-test.eng-ces.eng.ces | 42.7 | 0.624 |
| Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.210 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.4 | 0.165 |
| Tatoeba-test.eng-hbs.eng.hbs | 40.3 | 0.616 |
| Tatoeba-test.eng-hsb.eng.hsb | 14.3 | 0.344 |
| Tatoeba-test.eng-mkd.eng.mkd | 44.1 | 0.635 |
| Tatoeba-test.eng.multi | 41.0 | 0.610 |
| Tatoeba-test.eng-orv.eng.orv | 0.3 | 0.014 |
| Tatoeba-test.eng-pol.eng.pol | 42.0 | 0.637 |
| Tatoeba-test.eng-rue.eng.rue | 0.3 | 0.012 |
| Tatoeba-test.eng-rus.eng.rus | 40.5 | 0.612 |
| Tatoeba-test.eng-slv.eng.slv | 18.8 | 0.357 |
| Tatoeba-test.eng-ukr.eng.ukr | 38.8 | 0.600 |
### System Info:
- hf_name: eng-sla
- source_languages: eng
- target_languages: sla
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sla/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
- src_constituents: {'eng'}
- tgt_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: sla
- short_pair: en-sla
- chrF2_score: 0.61
- bleu: 41.0
- brevity_penalty: 0.976
- ref_len: 64809.0
- src_name: English
- tgt_name: Slavic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: sla
- prefer_old: False
- long_pair: eng-sla
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-id | e22dc8508f8b092f6b45b3c8495b35b2bc7c2c68 | 2021-09-09T21:43:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"id",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-id | 96 | null | transformers | 4,672 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-id
* source languages: es
* target languages: id
* OPUS readme: [es-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-id/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-id/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-id/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.es.id | 21.1 | 0.516 |
|
Helsinki-NLP/opus-mt-tr-es | 25c7e6495932403c656eecab8729bdf49ee483c8 | 2021-09-11T10:49:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tr-es | 96 | null | transformers | 4,673 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tr-es
* source languages: tr
* target languages: es
* OPUS readme: [tr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.tr.es | 56.3 | 0.722 |
|
UBC-NLP/AraT5-msa-base | 11f4f6e8367594d53f180d458babbd6e7046d240 | 2022-05-26T18:26:35.000Z | [
"pytorch",
"tf",
"t5",
"ar",
"transformers",
"Arabic T5",
"MSA",
"Twitter",
"Arabic Dialect",
"Arabic Machine Translation",
"Arabic Text Summarization",
"Arabic News Title and Question Generation",
"Arabic Paraphrasing and Transliteration",
"Arabic Code-Switched Translation"
] | null | false | UBC-NLP | null | UBC-NLP/AraT5-msa-base | 96 | 2 | transformers | 4,674 | ---
language:
- ar
tags:
- Arabic T5
- MSA
- Twitter
- Arabic Dialect
- Arabic Machine Translation
- Arabic Text Summarization
- Arabic News Title and Question Generation
- Arabic Paraphrasing and Transliteration
- Arabic Code-Switched Translation
---
# AraT5-msa-base
# AraT5: Text-to-Text Transformers for Arabic Language Generation
<img src="https://huggingface.co/UBC-NLP/AraT5-base/resolve/main/AraT5_CR_new.png" alt="AraT5" width="45%" height="35%" align="right"/>
This is the repository accompanying our paper [AraT5: Text-to-Text Transformers for Arabic Language Understanding and Generation](https://aclanthology.org/2022.acl-long.47/). In this repository, we introduce **AraT5<sub>MSA</sub>**, **AraT5<sub>Tweet</sub>**, and **AraT5**: three powerful Arabic-specific text-to-text Transformer-based models.
---
# How to use AraT5 models
Below is an example for fine-tuning **AraT5-base** for News Title Generation on the Aranews dataset
``` bash
!python run_trainier_seq2seq_huggingface.py \
--learning_rate 5e-5 \
--max_target_length 128 --max_source_length 128 \
--per_device_train_batch_size 8 --per_device_eval_batch_size 8 \
--model_name_or_path "UBC-NLP/AraT5-base" \
--output_dir "/content/AraT5_FT_title_generation" --overwrite_output_dir \
--num_train_epochs 3 \
--train_file "/content/ARGEn_title_genration_sample_train.tsv" \
--validation_file "/content/ARGEn_title_genration_sample_valid.tsv" \
--task "title_generation" --text_column "document" --summary_column "title" \
--load_best_model_at_end --metric_for_best_model "eval_bleu" --greater_is_better True --evaluation_strategy epoch --logging_strategy epoch --predict_with_generate\
--do_train --do_eval
```
For more details about the fine-tuning example, please read this notebook [](https://github.com/UBC-NLP/araT5/blob/main/examples/Fine_tuning_AraT5.ipynb)
In addition, we release the fine-tuned checkpoint of the News Title Generation (NGT) which is described in the paper. The model available at Huggingface ([UBC-NLP/AraT5-base-title-generation](https://huggingface.co/UBC-NLP/AraT5-base-title-generation)).
For more details, please visit our own [GitHub](https://github.com/UBC-NLP/araT5).
# AraT5 Models Checkpoints
AraT5 Pytorch and TensorFlow checkpoints are available on the Huggingface website for direct download and use ```exclusively for research```. ```For commercial use, please contact the authors via email @ (muhammad.mageed[at]ubc[dot]ca).```
| **Model** | **Link** |
|---------|:------------------:|
| **AraT5-base** | [https://huggingface.co/UBC-NLP/AraT5-base](https://huggingface.co/UBC-NLP/AraT5-base) |
| **AraT5-msa-base** | [https://huggingface.co/UBC-NLP/AraT5-msa-base](https://huggingface.co/UBC-NLP/AraT5-msa-base) |
| **AraT5-tweet-base** | [https://huggingface.co/UBC-NLP/AraT5-tweet-base](https://huggingface.co/UBC-NLP/AraT5-tweet-base) |
| **AraT5-msa-small** | [https://huggingface.co/UBC-NLP/AraT5-msa-small](https://huggingface.co/UBC-NLP/AraT5-msa-small) |
| **AraT5-tweet-small**| [https://huggingface.co/UBC-NLP/AraT5-tweet-small](https://huggingface.co/UBC-NLP/AraT5-tweet-small) |
# BibTex
If you use our models (Arat5-base, Arat5-msa-base, Arat5-tweet-base, Arat5-msa-small, or Arat5-tweet-small ) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{nagoudi-etal-2022-arat5,
title = "{A}ra{T}5: Text-to-Text Transformers for {A}rabic Language Generation",
author = "Nagoudi, El Moatez Billah and
Elmadany, AbdelRahim and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.47",
pages = "628--647",
abstract = "Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects{--}Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN. Although pre-trained with {\textasciitilde}49 less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: https://github.com/UBC-NLP/araT5.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
allenai/news_roberta_base | 1739782c98519dae4a4d79dd180d3f75b0a33a27 | 2021-05-20T13:35:01.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/news_roberta_base | 96 | null | transformers | 4,675 | Entry not found |
deepset/roberta-large-squad2-hp | 3d4caa9066ebb825f1aa054406e9a2f873368c20 | 2021-05-20T16:05:04.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | deepset | null | deepset/roberta-large-squad2-hp | 96 | 3 | transformers | 4,676 | Entry not found |
flax-community/clip-rsicd-v2 | fbe163da0609a2f185c22bb3af7b54ebad5a1800 | 2022-04-24T21:03:53.000Z | [
"pytorch",
"jax",
"clip",
"feature-extraction",
"transformers",
"vision"
] | feature-extraction | false | flax-community | null | flax-community/clip-rsicd-v2 | 96 | 5 | transformers | 4,677 | ---
tags:
- vision
---
# Model Card: clip-rsicd
## Model Details
This model is a fine-tuned [CLIP by OpenAI](https://huggingface.co/openai/clip-vit-base-patch32). It is designed with an aim to improve zero-shot image classification, text-to-image and image-to-image retrieval specifically on remote sensing images.
### Model Date
July 2021
### Model Type
The base model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
### Model Version
We release several checkpoints for `clip-rsicd` model. Refer to [our github repo](https://github.com/arampacha/CLIP-rsicd#evaluation-results) for performance metrics on zero-shot classification for each of those.
### Training
To reproduce the fine-tuning procedure one can use released [script](https://github.com/arampacha/CLIP-rsicd/blob/master/run_clip_flax_tv.py).
The model was trained using batch size 1024, adafactor optimizer with linear warmup and decay with peak learning rate 1e-4 on 1 TPU-v3-8.
Full log of the training run can be found on [WandB](https://wandb.ai/wandb/hf-flax-clip-rsicd/runs/2dj1exsw).
### Demo
Check out the model text-to-image and image-to-image capabilities using [this demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo).
### Documents
- [Fine-tuning CLIP on RSICD with HuggingFace and flax/jax on colab using TPU](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/Fine_tuning_CLIP_with_HF_on_TPU.ipynb)
### Use with Transformers
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")
url = "https://raw.githubusercontent.com/arampacha/CLIP-rsicd/master/data/stadium_1.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["residential area", "playground", "stadium", "forest", "airport"]
inputs = processor(text=[f"a photo of a {l}" for l in labels], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
for l, p in zip(labels, probs[0]):
print(f"{l:<16} {p:.4f}")
```
[Try it on colab](https://colab.research.google.com/github/arampacha/CLIP-rsicd/blob/master/nbs/clip_rsicd_zero_shot.ipynb)
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification.
In addition, we can imagine applications in defense and law enforcement, climate change and global warming, and even some consumer applications. A partial list of applications can be found [here](https://github.com/arampacha/CLIP-rsicd#applications). In general we think such models can be useful as digital assistants for humans engaged in searching through large collections of images.
We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
## Data
The model was trained on publicly available remote sensing image captions datasets. Namely [RSICD](https://github.com/201528014227051/RSICD_optimal), [UCM](https://mega.nz/folder/wCpSzSoS#RXzIlrv--TDt3ENZdKN8JA) and [Sydney](https://mega.nz/folder/pG4yTYYA#4c4buNFLibryZnlujsrwEQ). More information on the datasets used can be found on [our project page](https://github.com/arampacha/CLIP-rsicd#dataset).
## Performance and Limitations
### Performance
| Model-name | k=1 | k=3 | k=5 | k=10 |
| -------------------------------- | ----- | ----- | ----- | ----- |
| original CLIP | 0.572 | 0.745 | 0.837 | 0.939 |
| clip-rsicd-v2 (this model) | **0.883** | **0.968** | **0.982** | **0.998** |
## Limitations
The model is fine-tuned on RSI data but can contain some biases and limitations of the original CLIP model. Refer to [CLIP model card](https://huggingface.co/openai/clip-vit-base-patch32#limitations) for details on those.
|
google/vit-large-patch32-224-in21k | aca4f3f0f317ae94659cbb186e8534ff1d3e25d1 | 2022-01-28T10:21:30.000Z | [
"pytorch",
"tf",
"jax",
"vit",
"feature-extraction",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"transformers",
"vision",
"license:apache-2.0"
] | feature-extraction | false | google | null | google/vit-large-patch32-224-in21k | 96 | null | transformers | 4,678 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
inference: false
---
# Vision Transformer (large-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
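As a quick sanity check on the sequence length these choices imply (a sketch using only the numbers stated above):
```python
# 224x224 input split into 32x32 patches, plus one [CLS] token.
image_size, patch_size = 224, 32
num_patches = (image_size // patch_size) ** 2   # (224 / 32)^2 = 49 patches
sequence_length = num_patches + 1               # 50 tokens enter the Transformer encoder
print(num_patches, sequence_length)
```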
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-224-in21k')
model = ViTModel.from_pretrained('google/vit-large-patch32-224-in21k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
huggingtweets/porns_xx | c4fb2674fde92bc314caa33c0f8b03b589fe97c5 | 2021-08-07T13:34:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/porns_xx | 96 | null | transformers | 4,679 | ---
language: en
thumbnail: https://www.huggingtweets.com/porns_xx/1628343064919/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1423389132508782593/Meo5eDzd_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PORN HUB 🔞</div>
<div style="text-align: center; font-size: 14px;">@porns_xx</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from PORN HUB 🔞.
| Data | PORN HUB 🔞 |
| --- | --- |
| Tweets downloaded | 1399 |
| Retweets | 0 |
| Short tweets | 7 |
| Tweets kept | 1392 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/200x5dgt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @porns_xx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ha11ly3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ha11ly3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/porns_xx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mental/mental-roberta-base | 8166c3cb64a03c50b21c98ec1d6e4ab2c9617b07 | 2022-04-05T17:41:15.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2110.15621",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mental | null | mental/mental-roberta-base | 96 | 3 | transformers | 4,680 | # MentalRoBERTa
[MentalRoBERTa](https://arxiv.org/abs/2110.15621) is a model initialized with RoBERTa-Base (`cased_L-12_H-768_A-12`) and trained with mental health-related posts collected from Reddit.
We follow the standard pretraining protocols of BERT and RoBERTa with [Huggingface’s Transformers library](https://github.com/huggingface/transformers).
We use four Nvidia Tesla v100 GPUs to train the two language models. We set the batch size to 16 per GPU, evaluate every 1,000 steps, and train for 624,000 iterations. Training with four GPUs takes around eight days.
## Usage
Load the model via [Huggingface’s Transformers library](https://github.com/huggingface/transformers):
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("mental/mental-roberta-base")
model = AutoModel.from_pretrained("mental/mental-roberta-base")
```
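Since the checkpoint is a masked language model, it can also be queried directly with the fill-mask pipeline (a sketch; the example sentence is illustrative and not from the training data):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mental/mental-roberta-base")

# RoBERTa-style models use <mask> as the mask token.
print(fill_mask("I have been feeling very <mask> lately."))
```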
## Paper
For more details, refer to the paper [MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare](https://arxiv.org/abs/2110.15621).
```
@inproceedings{ji2022mentalbert,
title = {{MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare}},
author = {Shaoxiong Ji and Tianlin Zhang and Luna Ansari and Jie Fu and Prayag Tiwari and Erik Cambria},
year = {2022},
booktitle = {Proceedings of LREC}
}
```
## Social Impact
We train and release masked language models for mental health to facilitate the automatic detection of mental disorders in online social content for non-clinical use.
The models may help social workers find potential individuals in need of early prevention.
However, the model predictions are not psychiatric diagnoses.
We recommend anyone who suffers from mental health issues to call the local mental health helpline and seek professional help if possible.
Data privacy is an important issue, and we try to minimize the privacy impact when using social posts for model training.
During the data collection process, we only use anonymous posts that are manifestly available to the public.
We do not collect user profiles even though they are also manifestly public online.
We have not attempted to identify the anonymous users or interact with any anonymous users.
The collected data are stored securely with password protection even though they are collected from the open web.
There might also be some bias, fairness, uncertainty, and interpretability issues during the data collection and model training.
Evaluation of those issues is essential in future research. |
pierreguillou/bert-base-cased-pt-lenerbr | 7b39cc6efc62a98450cd1257832760ca64b2d92f | 2022-01-04T08:51:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"pt",
"dataset:pierreguillou/lener_br_finetuning_language_model",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | pierreguillou | null | pierreguillou/bert-base-cased-pt-lenerbr | 96 | 3 | transformers | 4,681 | ---
language:
- pt
tags:
- generated_from_trainer
datasets:
- pierreguillou/lener_br_finetuning_language_model
model-index:
- name: checkpoints
results:
- task:
name: Fill Mask
type: fill-mask
dataset:
name: pierreguillou/lener_br_finetuning_language_model
type: pierreguillou/lener_br_finetuning_language_model
metrics:
- name: Loss
type: loss
value: 1.352389
widget:
- text: "Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função legislativa – passaria a desempenhar atribuição que lhe é institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, competência que não lhe pertence, com evidente transgressão ao princípio constitucional da separação de poderes."
---
## (BERT base) Language modeling in the legal domain in Portuguese (LeNER-Br)
**bert-base-cased-pt-lenerbr** is a Language Model in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) by using a MASK objective.
You can check as well the [version large of this model](https://huggingface.co/pierreguillou/bert-large-cased-pt-lenerbr).
## Blog post
This language model is used to get a NER model on the Portuguese judicial domain. You can check the fine-tuned NER model at [pierreguillou/ner-bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/ner-bert-base-cased-pt-lenerbr).
All informations and links are in this blog post: [NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Widget & APP
You can test this model into the widget of this page.
## Using the model for inference in production
````
# install pytorch: check https://pytorch.org/
# !pip install transformers
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-base-cased-pt-lenerbr")
model = AutoModelForMaskedLM.from_pretrained("pierreguillou/bert-base-cased-pt-lenerbr")
````
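The same checkpoint can be used with the fill-mask pipeline (a sketch; the example sentence is a shortened version of the widget text above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pierreguillou/bert-base-cased-pt-lenerbr")

text = ("Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função "
        "legislativa – passaria a desempenhar atribuição que lhe é institucionalmente estranha.")
print(fill_mask(text)[:3])  # top-3 candidates for the masked token
```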
## Training procedure
## Notebook
The notebook of finetuning ([Finetuning_language_model_BERtimbau_LeNER_Br.ipynb](https://github.com/piegu/language-models/blob/master/Finetuning_language_model_BERtimbau_LeNER_Br.ipynb)) is in github.
### Training results
````
Num examples = 3227
Num Epochs = 5
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 2020
Step Training Loss Validation Loss
100 1.988700 1.616412
200 1.724900 1.561100
300 1.713400 1.499991
400 1.687400 1.451414
500 1.579700 1.433665
600 1.556900 1.407338
700 1.591400 1.421942
800 1.546000 1.406395
900 1.510100 1.352389
1000 1.507100 1.394799
1100 1.462200 1.368093
```` |
readerbench/RoGPT2-medium | f0587ebf9f0be6c25a222b4026ca7f893031d94b | 2021-07-22T11:18:49.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"ro",
"transformers"
] | text-generation | false | readerbench | null | readerbench/RoGPT2-medium | 96 | null | transformers | 4,682 | Model card for RoGPT2-medium
---
language:
- ro
---
# RoGPT2: Romanian GPT2 for text generation
All models are available:
* [RoGPT2-base](https://huggingface.co/readerbench/RoGPT2-base)
* [RoGPT2-medium](https://huggingface.co/readerbench/RoGPT2-medium)
* [RoGPT2-large](https://huggingface.co/readerbench/RoGPT2-large)
For code and evaluation check out [GitHub](https://github.com/readerbench/RoGPT2).
#### How to use
```python
# TensorFlow
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-medium')
model = TFAutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-medium')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='tf')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
# PyTorch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-medium')
model = AutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-medium')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='pt')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
```
## Training
---
### Corpus Statistics
| Corpus | Total size | Number of words | Number of sentences |
|:------:|:----------:|:---------------:|:-------------------:|
|OSCAR| 11.54 GB | 1745M | 48.46M |
|Wiki-Ro | 0.46 GB | 68M | 1.79M |
|Debates | 0.5 GB | 73M | 3.61M |
|Books | 4.37 GB | 667M | 37.39M |
|News | 0.15 GB | 23M | 0.77M |
### Training Statistics
| Version | Number of parameters | Number of epoch | Duration of an epoch | Context size | Batch size | PPL |
|:-------:|:--------------------:|:---------------:|:--------------------:|:----------:|:----------:|:---:|
| Base | 124M | 15 | 7h | 1024 | 72 | 22.96 |
| Medium | 354M | 10 | 22h | 1024 | 24 | 17.64 |
| Large | 774M | 5 | **45h** | 512 | 16 | **16.77**|
## Evaluation
---
### 1. MOROCO
| Model | Dialect | Md to Ro | Ro to Md |
|:-----------------:|:-------:|:--------:|:--------:|
| KRR + SK | 94.06 | 67.59 | 75.47 |
| BERT-base-ro | 95.98 | 69.90 | 78.08 |
| RoBERT-small | 95.76 | 69.05 | 80.15 |
| RoBERT-base |**97.24**| 68.80 | 82.37 |
| RoBERT-large | 97.21 | 69.50 | **83.26**|
| RoGPT2-base | 96.69 | 69.82 | 77.55 |
| RoGPT2-medium | 96.42 | 69.77 | 80.51 |
| RoGPT2-large | 96.93 |**71.07** | 82.56 |
### 2. LaRoSeDa
| Model | Binary: Accuracy | Binary: F1-Score | Multi-Class: Accuracy | Multi-Class: F1-Score |
|:------------:|:----------------:|:----------------:|:---------------------:|:---------------------:|
|BERT-base-ro | 98.07 | 97.94 | - |79.61 |
| RoDiBERT |**98.40** |**98.31** | - |83.01 |
| RoBERT-small | 97.44 | 97.43 | 89.30 |84.23 |
| RoBERT-base | 98.27 | 98.26 | 90.59 |86.27 |
| RoBERT-large | 98.20 | 98.19 |**90.93** |**86.63** |
| RoGPT2-base | 97.89 | 97.88 |89.65 |84.68 |
|RoGPT2-medium | 98.03 |98.04 | 90.29 | 85.37 |
| RoGPT2-large | 98.06 |98.07 | 90.26 | 84.89 |
### 3. RoSTS
| Model | Spearman dev-set | Spearman test-set | Pearson dev-set | Pearson test-set |
|:------------:|:----------------:|:-----------------:|:---------------:|:----------------:|
|BERT-base-ro | 84.26 | 80.86 | 84.59 | 81.59 |
|RoDiBERT | 77.07 | 71.47 | 77.13 | 72.25 |
|RoBERT-small | 82.06 | 78.06 | 81.66 | 78.49 |
|RoBERT-base | 84.93 | 80.39 | 85.03 | 80.39 |
|RoBERT-large |**86.25** |**83.15** |**86.58** |**83.76** |
|RoGPT2-base | 83.51 | 79.77 | 83.74 | 80.56 |
|RoGPT2-medium | 85.75 | 82.25 | 86.04 | 83.16 |
|RoGPT2-large | 85.70 | 82.64 | 86.14 | 83.46 |
### 4. WMT16
| Model | Decoder method | Ro-En | En-Ro |
|:------------:|:--------------:|:------:|:------:|
|mBART | - |**38.5**|**38.5**|
|OpenNMT | - | - | 24.7 |
|RoGPT2-base |Greedy | 30.37 | 20.27 |
|RoGPT2-base |Beam-search-4 | 31.26 | 22.31 |
|RoGPT2-base |Beam-search-8 | 31.39 | 22.95 |
|RoGPT2-medium |Greedy | 32.48 | 22.18 |
|RoGPT2-medium |Beam-search-4 | 34.08 | 24.03 |
|RoGPT2-medium |Beam-search-8 | 34.16 | 24.13 |
|RoGPT2-large |Greedy | 33.69 | 23.31 |
|RoGPT2-large |Beam-search-4 |34.40 |24.23 |
|RoGPT2-large |Beam-search-8 |34.51 |24.32 |
### 5. XQuAD
| Model |Decoder method | EM | F1-Score |
|:------------:|:-------------:|:-----:|:--------:|
|BERT-base-ro | - | 47.89 | 63.74 |
|RoDiBERT | - | 21.76 | 34.57 |
|RoBERT-small | - | 30.84 | 45.17 |
|RoBERT-base | - | 53.52 | 70.04 |
|RoBERT-large | - | 55.46 | 69.64 |
|mBERT | - | 59.9 | 72.7 |
|XLM-R Large | - |**69.7** |**83.6** |
|RoGPT2-base | Greedy | 23.69 | 35.97 |
|RoGPT2-base | Beam-search-4 | 24.11 | 35.27 |
|RoGPT2-medium | Greedy | 29.66 | 44.74 |
|RoGPT2-medium | Beam-search-4 | 31.59 | 45.32 |
|RoGPT2-large | Greedy | 29.74 | 42.98 |
|RoGPT2-large | Beam-search-4 | 29.66 | 43.05 |
|RoGPT2-base-en-ro | Greedy | 23.86 | 34.27 |
|RoGPT2-base-en-ro | Beam-search-4 | 25.04 | 34.51 |
|RoGPT2-medium-en-ro | Greedy | 27.05 | 39.75 |
|RoGPT2-medium-en-ro | Beam-search-4 | 27.64 | 39.11 |
|RoGPT2-large-en-ro | Greedy | 28.40 | 39.79 |
|RoGPT2-large-en-ro | Beam-search-4 | 28.73 | 39.71 |
|RoGPT2-large-en-ro-mask | Greedy | 31.34 | 44.71 |
|RoGPT2-large-en-ro-mask| Beam-search-4 | 31.59 | 43.53 |
### 6. Wiki-Ro: LM
| Model | PPL dev | PPL test |
|:------------:|:-------:|:--------:|
|BERT-base-ro | 29.0897 | 28.0043|
|RoGPT2-base | 34.3795 | 33.7460|
|RoGPT2-medium | 23.7879 | 23.4581|
|RoGPT2-large | **21.7491** | **21.5200** |
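The perplexities above are the exponential of the average cross-entropy loss of the causal language model. A minimal sketch of that computation on a single sentence (the checkpoint identifier and the sentence are illustrative assumptions; the Wiki-Ro evaluation corpus is not bundled here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed checkpoint identifier, used only for illustration.
model_name = "readerbench/RoGPT2-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "București este capitala României."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss over the tokens.
    loss = model(**enc, labels=enc["input_ids"]).loss

print("perplexity:", torch.exp(loss).item())
```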
### 7. RoGEC
| Model | Decoder method | P | R | F<sub>0.5</sub> |
|:-----:|:--------------:|:---:|:---:|:------:|
|Transformer-tiny | Beam-search | 53.53 | 26.36 | 44.38 |
|Transformer-base Finetuning | Beam-search | 56.05 | 46.19 | 53.76 |
|Transformer-base Finetuning | Beam-search-LM | 50.68 | 45.39 | 49.52 |
|Transformer-base Finetuning | Beam-search-norm-LM | 51.06 | 45.43 | 49.83 |
|RoGPT2-base | Greedy | 59.02 | 49.35 | 56.80 |
|RoGPT2-base | Beam-search-4 | 65.23 | 49.26 | 61.26 |
|RoGPT2-base |Beam-search-8 | 65.88 | 49.64 | 61.84 |
|RoGPT2-medium | Greedy | 69.97 | 57.94 | 67.18 |
|RoGPT2-medium | Beam-search-4 | **72.46** | **57.99** | **69.01** |
|RoGPT2-medium | Beam-search-8 | 72.24 | 57.69 | 68.77 |
|RoGPT2-large | Greedy | 61.90 | 49.09 | 58.83 |
|RoGPT2-large | Beam-search-4 | 65.24 | 49.43 | 61.32 |
|RoGPT2-large | Beam-search-8 | 64.96 | 49.22 | 61.06 |
|RoGPT2-base* | Greedy | 68.67 | 49.60 | 63.77 |
|RoGPT2-base* | Beam-search-4 | 71.16 | 50.53 | 65.79 |
|RoGPT2-base* | Beam-search-8 | 71.68 | 50.65 | 66.18 |
|RoGPT2-medium* | Greedy | 58.21 | 43.32 | 54.47 |
|RoGPT2-medium* | Beam-search-4 | 68.31 | 43.78 | 61.43 |
|RoGPT2-medium* | Beam-search-8 | 68.68 | 43.99 | 61.75 |
|RoGPT2-large* | Greedy | 64.86 | 41.30 | 58.22 |
|RoGPT2-large* | Beam-search-4 | 65.57 | 41.00 | 58.55 |
|RoGPT2-large* | Beam-search-8 | 65.44 | 41.09 | 58.50 |
**Note**: * the models marked with an asterisk were trained using the dataset of 3,000,000 artificially generated pairs.
## Acknowledgments
---
Research supported with [Cloud TPUs](https://cloud.google.com/tpu/) from Google's [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc)
## How to cite
---
Niculescu, M. A., Ruseti, S., and Dascalu, M. (submitted). RoGPT2: Romanian GPT2 for Text Generation
|
vyang/plc2proc | c17288bf03cf8269231c7870e32543c403f54d2e | 2022-02-23T15:43:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | vyang | null | vyang/plc2proc | 96 | null | transformers | 4,683 | ---
license: apache-2.0
---
|
xhyi/PT_GPTNEO350_ATG | 56ab08aaa6802d0f830d42c352d5d536be72811d | 2022-07-27T19:23:11.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | xhyi | null | xhyi/PT_GPTNEO350_ATG | 96 | 7 | transformers | 4,684 |
# GPT NEO 350M
This repository hosts the pulled GPT-Neo 350M checkpoint that EleutherAI removed. I am keeping it 😎 |
amandakonet/climatebert-fact-checking | a5675d3444ed1a3113a2ac9a4a565ae2f2b6c237 | 2022-04-16T22:39:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:climate_fever",
"transformers",
"fact-checking",
"climate",
"text entailment",
"license:mit"
] | text-classification | false | amandakonet | null | amandakonet/climatebert-fact-checking | 96 | null | transformers | 4,685 | ---
license: mit
language:
- en
datasets: climate_fever
tags:
- fact-checking
- climate
- text entailment
---
This model is [ClimateBert](https://huggingface.co/climatebert/distilroberta-base-climate-f) fine-tuned on the textual entailment task using Climate FEVER data. Given (claim, evidence) pairs, the model predicts support (entailment), refute (contradiction), or not enough info (neutral). The model has 67% validation accuracy.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained("amandakonet/climatebert-fact-checking")
tokenizer = AutoTokenizer.from_pretrained("amandakonet/climatebert-fact-checking")
features = tokenizer(['Beginning in 2005, however, polar ice modestly receded for several years'],
                     ['Polar Discovery "Continued Sea Ice Decline in 2005'],
                     padding='max_length', truncation=True, return_tensors="pt", max_length=512)
model.eval()
with torch.no_grad():
    scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
``` |
anshr/distilgpt2_reward_model_02 | 49952bbcbe46d9d68e6d61bc15dbb99b6755bab2 | 2022-04-24T00:53:14.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | anshr | null | anshr/distilgpt2_reward_model_02 | 96 | null | transformers | 4,686 | Entry not found |
florentgbelidji/clip-text-feature-extraction | 64005001f2f133c67d81ba48c73381b843181e1f | 2022-07-06T08:28:24.000Z | [
"pytorch",
"feature-extraction",
"sentence_embedding"
] | feature-extraction | false | florentgbelidji | null | florentgbelidji/clip-text-feature-extraction | 96 | null | null | 4,687 | ---
tags:
- feature-extraction
- sentence_embedding
--- |
lgessler/coptic-bert-small-uncased | 0dd4abdc3a94b6d3a5648082b83c5f01152e1364 | 2022-07-21T19:38:13.000Z | [
"pytorch",
"bert",
"feature-extraction",
"cop",
"transformers"
] | feature-extraction | false | lgessler | null | lgessler/coptic-bert-small-uncased | 96 | null | transformers | 4,688 | ---
language: cop
widget:
- text: "ⲁⲩⲱ ⲉⲓⲥ ⲡⲉⲧⲙⲙⲁⲩ ⲁϥⲉⲓ ⲉϥⲣⲓⲙⲉ."
---
A small `BertModel` for Coptic. |
Finnish-NLP/convbert-base-finnish | 7ca436faf91f685e3a8137bec726012cf88fcbcf | 2022-06-13T16:15:25.000Z | [
"pytorch",
"tf",
"tensorboard",
"convbert",
"feature-extraction",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:2008.02496",
"transformers",
"finnish",
"license:apache-2.0"
] | feature-extraction | false | Finnish-NLP | null | Finnish-NLP/convbert-base-finnish | 95 | 1 | transformers | 4,689 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- convbert
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
---
# ConvBERT for Finnish
Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
[this paper](https://arxiv.org/abs/2008.02496)
and first released at [this page](https://github.com/yitu-opensource/ConvBert).
**Note**: this model is the ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ConvBERT generator model intended to be used for the fill-mask task is released here: [Finnish-NLP/convbert-base-generator-finnish](https://huggingface.co/Finnish-NLP/convbert-base-generator-finnish)
## Model description
Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.
Compared to BERT and ELECTRA models, ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning.
## Intended uses & limitations
You can use the raw model for extracting features or fine-tune it to a downstream task like text classification.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ConvBertTokenizer, ConvBertModel
import torch
tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-finnish")
model = ConvBertModel.from_pretrained("Finnish-NLP/convbert-base-finnish")
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state)
```
and in TensorFlow:
```python
from transformers import ConvBertTokenizer, TFConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("Finnish-NLP/convbert-base-finnish")
model = TFConvBertModel.from_pretrained("Finnish-NLP/convbert-base-finnish")
inputs = tokenizer("Joka kuuseen kurkottaa, se katajaan kapsahtaa", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ConvBERT model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.
Training code was from the official [ConvBERT repository](https://github.com/yitu-opensource/ConvBert) and some additional instructions were taken from [here](https://github.com/stefan-it/turkish-bert/blob/master/convbert/CHEATSHEET.md).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths, 128 and 512, while Eduskunta was fine-tuned only with a sequence length of 128.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our other models:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|-----------------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/convbert-base-finnish |86.98 |94.04 |95.02 |71.87 |
|Finnish-NLP/electra-base-discriminator-finnish |86.25 |93.78 |94.77 |70.20 |
|Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 |
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** |
To conclude, this ConvBERT model outperforms the ELECTRA model while trailing the other models, but it is still fairly competitive with our roberta-large models when taking into account that this ConvBERT model has 106M parameters whereas the roberta-large models have 355M. ConvBERT outperforming ELECTRA is also in line with the findings of the [ConvBERT paper](https://arxiv.org/abs/2008.02496).
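As a rough illustration of this fine-tuning setup, the sketch below fine-tunes the discriminator checkpoint for sequence classification with the 🤗 `Trainer`. The toy dataset, label count and hyperparameters are placeholders and do not reproduce the exact configuration behind the numbers above.
```python
from datasets import Dataset
from transformers import (ConvBertForSequenceClassification, ConvBertTokenizer,
                          Trainer, TrainingArguments)

model_name = "Finnish-NLP/convbert-base-finnish"
tokenizer = ConvBertTokenizer.from_pretrained(model_name)
model = ConvBertForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny in-memory toy dataset standing in for Yle News / Eduskunta.
data = Dataset.from_dict({
    "text": ["Tämä on hyvä uutinen.", "Tämä on huono uutinen."],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="convbert-finnish-classifier", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```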
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
Helsinki-NLP/opus-mt-es-pl | a1514156efe6b61a49e19e67d93628c482f63f9a | 2021-09-09T21:44:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"pl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-pl | 95 | null | transformers | 4,690 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pl
* source languages: es
* target languages: pl
* OPUS readme: [es-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.pl | 44.6 | 0.649 |
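## Usage
A minimal usage sketch with the Marian classes in 🤗 Transformers (the example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-pl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Spanish to Polish.
batch = tokenizer(["La vida es bella."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```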
|
Helsinki-NLP/opus-mt-vi-es | d6f74862cce929649b728270c7337a97881bda23 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"vi",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-vi-es | 95 | null | transformers | 4,691 | ---
language:
- vi
- es
tags:
- translation
license: apache-2.0
---
### vie-spa
* source group: Vietnamese
* target group: Spanish
* OPUS readme: [vie-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-spa/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.spa | 32.9 | 0.540 |
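## Usage
A quick way to try the checkpoint is through the translation pipeline in 🤗 Transformers (the example sentence is illustrative):
```python
from transformers import pipeline

# Vietnamese -> Spanish translation with the checkpoint described in this card.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-vi-es")
print(translator("Tôi thích đọc sách."))
```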
### System Info:
- hf_name: vie-spa
- source_languages: vie
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'es']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: spa
- short_pair: vi-es
- chrF2_score: 0.54
- bleu: 32.9
- brevity_penalty: 0.953
- ref_len: 3832.0
- src_name: Vietnamese
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: es
- prefer_old: False
- long_pair: vie-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
HooshvareLab/bert-fa-base-uncased-ner-arman | 889a2b8c1d7d4c8bca305365069ad3045a00c224 | 2021-05-18T20:52:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | HooshvareLab | null | HooshvareLab/bert-fa-base-uncased-ner-arman | 95 | null | transformers | 4,692 | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian NER [ARMAN, PEYMA]
This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with the `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the rest of the terms of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`.
### ARMAN
The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|-------------|-------------|-------|------------|--------------|----------|----------------|------------|
| ARMAN | 99.84* | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
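For a quick start without the notebook, a minimal sketch of the same pipeline usage (the example sentence is illustrative):
```python
from transformers import pipeline

# Token-classification (NER) pipeline with the ARMAN fine-tuned ParsBERT model.
ner = pipeline("ner", model="HooshvareLab/bert-fa-base-uncased-ner-arman",
               aggregation_strategy="simple")
print(ner("دانشگاه تهران در سال ۱۳۱۳ تأسیس شد."))
```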
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
IDEA-CCNL/Zhouwenwang-Unified-1.3B | 9cfe461021dbc10c2fc8657b6794d07f643b4a79 | 2022-04-12T02:03:58.000Z | [
"pytorch",
"megatron-bert",
"zh",
"transformers",
"license:apache-2.0"
] | null | false | IDEA-CCNL | null | IDEA-CCNL/Zhouwenwang-Unified-1.3B | 95 | null | transformers | 4,693 | ---
language:
- zh
license: apache-2.0
widget:
- text: "生活的真谛是[MASK]。"
---
# Zhouwenwang-Unified-1.3B model (Chinese),one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
Zhouwenwang-Unified-1.3B applies a new unified structure and was jointly developed by IDEA-CCNL and Zhuiyi Technology. During pre-training, the model treats the LM (Language Model) and MLM (Masked Language Model) tasks uniformly and adds rotary position embeddings, so that the model can both generate and understand text. Zhouwenwang-Unified-1.3B is the largest model for LM and MLM tasks in the Chinese field. It will continue to be optimized in the direction of model scale, knowledge integration, and supervision task assistance.
## Usage
The structure of Zhouwenwang-Unified-1.3B is not available in [Transformers](https://github.com/huggingface/transformers); you can run the following code to get the structure of Zhouwenwang-Unified-1.3B from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### Load model
```python
from fengshen import RoFormerModel
from fengshen import RoFormerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
```
### Generate task
You can use Zhouwenwang-Unified-1.3B to continue writing text:
```python
from fengshen import RoFormerModel
from transformers import AutoTokenizer
import torch
import numpy as np
sentence = '清华大学位于'
max_length = 32
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
for i in range(max_length):
    encode = torch.tensor(
        [[tokenizer.cls_token_id]+tokenizer.encode(sentence, add_special_tokens=False)]).long()
    logits = model(encode)[0]
    logits = torch.nn.functional.linear(
        logits, model.embeddings.word_embeddings.weight)
    logits = torch.nn.functional.softmax(
        logits, dim=-1).cpu().detach().numpy()[0]
    sentence = sentence + \
        tokenizer.decode(int(np.random.choice(logits.shape[1], p=logits[-1])))
    if sentence[-1] == '。':
        break
print(sentence)
```
## Scores on downstream chinese tasks (without any data augmentation)
| Model| afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
| roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.777 | 0.814 | 0.8914 | 0.86 |
| Zhouwenwang-Unified-1.3B | 0.7463 | 0.6036 | 0.6288 | 0.7654 | 0.7741 | 0.8849 | 0.8777 |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
akahana/vit-base-cats-vs-dogs | b7e917ca8728ad2138712aa863a87303c453b0e6 | 2021-12-09T04:36:57.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:cats_vs_dogs",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | akahana | null | akahana/vit-base-cats-vs-dogs | 95 | null | transformers | 4,694 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- cats_vs_dogs
metrics:
- accuracy
model-index:
- name: vit-base-cats-vs-dogs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cats_vs_dogs
type: cats_vs_dogs
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9883257403189066
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cats-vs-dogs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0369
- Accuracy: 0.9883
## how to use
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTForImageClassification.from_pretrained('akahana/vit-base-cats-vs-dogs')

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# Map the highest-scoring logit back to its class name (cats vs. dogs).
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0949 | 1.0 | 2488 | 0.0369 | 0.9883 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dbmdz/electra-base-italian-xxl-cased-generator | 900e1c97be444c42439b796c16e5b62d899b6e5f | 2020-12-11T21:37:22.000Z | [
"pytorch",
"electra",
"fill-mask",
"it",
"dataset:wikipedia",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/electra-base-italian-xxl-cased-generator | 95 | null | transformers | 4,695 | ---
language: it
license: mit
datasets:
- wikipedia
---
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
ethanyt/guwenbert-large | 183588933ec4f07a29c05c2ef116c2074233c078 | 2021-06-02T03:24:26.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"zh",
"transformers",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | ethanyt | null | ethanyt/guwenbert-large | 95 | 1 | transformers | 4,696 | ---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "[MASK]太元中,武陵人捕鱼为业。"
- text: "问征夫以前路,恨晨光之[MASK]微。"
- text: "浔阳江头夜送客,枫叶[MASK]花秋瑟瑟。"
---
# GuwenBERT
## Model description

This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at RoBERTa's official repo.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-large")
model = AutoModel.from_pretrained("ethanyt/guwenbert-large")
```
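Since the checkpoint is a masked language model, it can also be queried directly with the fill-mask pipeline; a minimal sketch using one of the widget sentences above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ethanyt/guwenbert-large")
# "[MASK]" is the mask token configured for this checkpoint (see the widget examples).
print(fill_mask("[MASK]太元中,武陵人捕鱼为业。"))
```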
## Training data
The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang.
76% of them are punctuated.
The total number of characters is 1.7B (1,743,337,673).
All traditional Characters are converted to simplified characters.
The vocabulary is constructed from this data set and the size is 23,292.
## Training procedure
The models are initialized with `hfl/chinese-roberta-wwm-ext-large` and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 1e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
## Eval results
### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
| NE Type | Precision | Recall | F1 |
|:----------:|:-----------:|:------:|:-----:|
| Book Name | 77.50 | 73.73 | 75.57 |
| Other Name | 85.85 | 89.32 | 87.55 |
| Micro Avg. | 83.88 | 85.39 | 84.63 |
## About Us
We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology.
For more cooperation, please contact email: ethanyt [at] qq.com
> Created with ❤️ by Tan Yan [](https://github.com/Ethan-yt) and Zewen Chi [](https://github.com/CZWin32768) |
google/t5-efficient-small-dm768 | 447413aaa95f9c43eaf49f6202763dbd560e6f7e | 2022-02-15T10:56:46.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-small-dm768 | 95 | null | transformers | 4,697 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-DM768 (Deep-Narrow version)
T5-Efficient-SMALL-DM768 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-dm768** - is of model type **Small** with the following variations:
- **dm** is **768**
It has **90.77** million parameters and thus requires *ca.* **363.1 MB** of memory in full precision (*fp32*)
or **181.55 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch is shown after the example lists):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
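As a minimal starting point, the sketch below only loads the checkpoint and computes a training loss on one toy pair; the task prefix, example text and hyperparameters are illustrative assumptions, and real fine-tuning should follow one of the recipes linked above.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Pretrained-only checkpoint: expect useful task performance only after fine-tuning.
model_name = "google/t5-efficient-small-dm768"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer(["summarize: The quick brown fox jumps over the lazy dog."],
                  return_tensors="pt")
labels = tokenizer(["A fox jumps over a dog."], return_tensors="pt").input_ids

loss = model(**batch, labels=labels).loss  # cross-entropy loss for this toy example
print(loss.item())
```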
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
microsoft/unilm-base-cased | 9e2691e6ff9711fbd922f0b18dc0a6b77c7cc530 | 2020-04-28T21:22:52.000Z | [
"pytorch",
"transformers"
] | null | false | microsoft | null | microsoft/unilm-base-cased | 95 | null | transformers | 4,698 | Entry not found |
nateraw/huggingpics-package-demo | 99bd2b30940d28bfc7ff7bfdd667d1b8c8784b51 | 2021-11-09T20:44:45.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nateraw | null | nateraw/huggingpics-package-demo | 95 | null | transformers | 4,699 | ---
license: apache-2.0
tags:
- image-classification
- huggingpics
- generated_from_trainer
model-index:
- name: huggingpics-package-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingpics-package-demo
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3761
- Acc: 0.9403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0328 | 1.0 | 24 | 0.9442 | 0.7463 |
| 0.8742 | 2.0 | 48 | 0.7099 | 0.9403 |
| 0.6451 | 3.0 | 72 | 0.5050 | 0.9403 |
| 0.508 | 4.0 | 96 | 0.3761 | 0.9403 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|