modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
julien-c/DPRNNTasNet-ks16_WHAM_sepclean | a4b441f4b89dffd87d67dadc78cf9f2b9c1f8581 | 2021-09-23T16:04:27.000Z | [
"pytorch",
"dataset:wham",
"dataset:sep_clean",
"arxiv:2005.04132",
"asteroid",
"audio-to-audio",
"audio",
"audio-source-separation",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | julien-c | null | julien-c/DPRNNTasNet-ks16_WHAM_sepclean | 114 | 2 | asteroid | 4,400 | ---
tags:
- audio-to-audio
- asteroid
- audio
- audio-source-separation
datasets:
- wham
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `mpariente/DPRNNTasNet(ks=16)_WHAM!_sepclean`
♻️ Imported from https://zenodo.org/record/3903795#.X8pMBRNKjUI
This model was trained by Manuel Pariente using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the sep_clean task of the WHAM! dataset.
### Demo: How to use in Asteroid
```python
# coming soon
```
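Until the official demo is published, here is a hedged sketch (not from the original card) using Asteroid's generic `BaseModel` API; `mixture.wav` is a placeholder path and the exact output handling may differ between Asteroid versions:
```python
from asteroid.models import BaseModel

# Load this checkpoint through Asteroid's generic model API.
model = BaseModel.from_pretrained("julien-c/DPRNNTasNet-ks16_WHAM_sepclean")

# Separate a local mixture file; with resample=True, inputs at other sample
# rates are resampled to the model's 8 kHz before separation.
model.separate("mixture.wav", resample=True)
```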
### Training config
- data:
  - mode: min
  - nondefault_nsrc: None
  - sample_rate: 8000
  - segment: 2.0
  - task: sep_clean
  - train_dir: data/wav8k/min/tr
  - valid_dir: data/wav8k/min/cv
- filterbank:
  - kernel_size: 16
  - n_filters: 64
  - stride: 8
- main_args:
  - exp_dir: exp/train_dprnn_ks16/
  - help: None
- masknet:
  - bidirectional: True
  - bn_chan: 128
  - chunk_size: 100
  - dropout: 0
  - hid_size: 128
  - hop_size: 50
  - in_chan: 64
  - mask_act: sigmoid
  - n_repeats: 6
  - n_src: 2
  - out_chan: 64
- optim:
  - lr: 0.001
  - optimizer: adam
  - weight_decay: 1e-05
- positional arguments:
- training:
  - batch_size: 6
  - early_stop: True
  - epochs: 200
  - gradient_clipping: 5
  - half_lr: True
  - num_workers: 6
#### Results
- `si_sdr`: 18.227683982688003
- `si_sdr_imp`: 18.22883576588251
- `sdr`: 18.617789605060587
- `sdr_imp`: 18.466745426438173
- `sir`: 29.22773720052717
- `sir_imp`: 29.07669302190474
- `sar`: 19.116352171914485
- `sar_imp`: -130.06009796503054
- `stoi`: 0.9722025377865715
- `stoi_imp`: 0.23415680987800583
### Citing Asteroid
```BibTex
@inproceedings{Pariente2020Asteroid,
title={Asteroid: the {PyTorch}-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and
Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and
Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge
and Emmanuel Vincent},
year={2020},
booktitle={Proc. Interspeech},
}
```
Or on arXiv:
```bibtex
@misc{pariente2020asteroid,
title={Asteroid: the PyTorch-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge and Emmanuel Vincent},
year={2020},
eprint={2005.04132},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
``` |
microsoft/wavlm-base | efa81aae7ff777e464159e0f877d54eac5b84f81 | 2021-12-22T17:23:36.000Z | [
"pytorch",
"wavlm",
"feature-extraction",
"en",
"arxiv:2110.13900",
"transformers",
"speech"
] | feature-extraction | false | microsoft | null | microsoft/wavlm-base | 114 | 1 | transformers | 4,401 | ---
language:
- en
datasets:
tags:
- speech
inference: false
---
# WavLM-Base
[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
The model was pre-trained on 960h of [Librispeech](https://huggingface.co/datasets/librispeech_asr).
[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.
# Usage
This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be
used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on the [SUPERB benchmark](https://superbbenchmark.org/).
**Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence
of phonemes before fine-tuning.
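Even without fine-tuning, the raw checkpoint can be used to extract hidden-state features. Below is a minimal sketch (not from the original card), assuming the repository ships a feature-extractor config; the silent waveform is a stand-in for real 16kHz audio:
```python
import torch
from transformers import AutoFeatureExtractor, WavLMModel

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
model = WavLMModel.from_pretrained("microsoft/wavlm-base")

# One second of silent 16 kHz audio as a placeholder input.
waveform = torch.zeros(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, frames, 768)
```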
## Speech Recognition
To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).
## Speech Classification
To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
## Speaker Verification
TODO
## Speaker Diarization
TODO
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
mrm8488/bert2bert_shared-turkish-summarization | 4d2ab3ea33c5e3cafa5a94b444376321fc404733 | 2021-05-22T11:11:45.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"tr",
"dataset:mlsum",
"transformers",
"summarization",
"news",
"autotrain_compatible"
] | summarization | false | mrm8488 | null | mrm8488/bert2bert_shared-turkish-summarization | 114 | 1 | transformers | 4,402 | ---
tags:
- summarization
- news
language: tr
datasets:
- mlsum
widget:
- text: "Ankara'da oto hırsızlık çetesine yönelikdüzenlenen ‘Balta’ operasyonunda, çete lideri‘balta’ lakaplı şahıs ile 7 kişi gözaltına alındı.Diğer bir operasyonda ise 3 şüpheli çaldıklarıaraçları parçalarken yapılan baskında suçüstüyakalandı. Ankara Emniyet Müdürlüğü’ne bağlıAsayiş Şube Müdürlüğü Oto Hırsızlık Büro Amirliğiekipleri, Ankara ilinde meydana gelen, otohırsızlık olaylarına karşı Ankara CumhuriyetBaşsavcılığı’nın izniyle yürüttükleri 3 aylıkçalışma sonucunda operasyon düğmesine bastı.Yapılan teknik ve fiziki takip sonucunda, ‘Balta’çetesine ulaşıldı. Çeteyi izleyen ekipler, Ankara,Konya ve Antalya’da eş zamanlı operasyondüzenleyerek çete lideri ‘Balta’ lakaplı Necati D.ve çete üyesi 7 kişiyi yakaladı. Takip edildiğinianlayınca ortadan kayboldu Çete lideri ‘Balta’nın,polis ekipleri tarafından izlendiğini anladığı veaylarca ortada görünmediğini tespit eden HırsızlıkBüro ekipleri, ‘Balta’nın kendi suç ortaklarını dadolandırmaya çalıştığını saptadı. Adliyeye sevkedilen şüphelilerden haklarında çok sayıda otohırsızlık kaydı bulunan çete lideri Necati D.,Ferhat K., Atakan A. ve Tayfun G., çıkarıldıklarınöbetçi sulh hakimliğince tutuklanarak cezaevinegönderildi. Diğer 3 şüpheli ise adli kontrolşartıyla serbest bırakıldı. Çaldıkları araçlarıparçalarken polis bastı Diğer bir olay iseAltındağ ilçesinde meydana geldi. Hırsızlık Büroekipleri inceledikleri 2 oto hırsızlık olayınınsonucunda 3 şüpheliyi takibe aldı. Şüphelilerinçaldıkları 2 aracı İvedik Hurdacılar Sitesi’ndekidepolarında parçalayacaklarını belirleyen ekiplerharekete geçti. Depoya baskın yapan polisekipleri, 3 şüpheliyi suçüstü yakaladı.Emniyetteki işlemlerinin ardından adliyeye sevkedilen hırsızlık zanlıları, çıkarıldıkları nöbetçimahkeme tarafından adli kontrol şartıyla serbestbırakıldı."
---
# Turkish BERT2BERT (shared) fine-tuned on MLSUM TR for summarization
## Model
[dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) (BERT Checkpoint)
## Dataset
**MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, **Turkish**. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.
[MLSUM tu/tr](https://huggingface.co/datasets/viewer/?dataset=mlsum)
## Results
|Set|Metric| Value|
|----|------|------|
| Test |Rouge2 - mid -precision | **32.41**|
| Test | Rouge2 - mid - recall | **28.65**|
| Test | Rouge2 - mid - fmeasure | **29.48**|
## Usage
```python
import torch
from transformers import BertTokenizerFast, EncoderDecoderModel
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bert2bert_shared-turkish-summarization'
tokenizer = BertTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)
def generate_summary(text):
    inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    output = model.generate(input_ids, attention_mask=attention_mask)
    return tokenizer.decode(output[0], skip_special_tokens=True)
text = "Your text here..."
generate_summary(text)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/electra-small-finetuned-squadv2 | 04a43d04d9d117c76f0d26bacfb613ceee028a14 | 2020-12-11T21:54:01.000Z | [
"pytorch",
"electra",
"question-answering",
"en",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/electra-small-finetuned-squadv2 | 114 | 1 | transformers | 4,403 | ---
language: en
---
# Electra small ⚡ + SQuAD v2 ❓
[Electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) fine-tuned on [SQUAD v2.0 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
## Details of the downstream task (Q&A) - Dataset 📚
**SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path 'google/electra-small-discriminator' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v2.0.json' \
--predict_file '/content/dataset/dev-v2.0.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/output' \
--overwrite_output_dir \
--save_steps 1000 \
--version_2_with_negative
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **69.71** |
| **F1** | **73.44** |
| **Size**| **50 MB** |
```json
{
'exact': 69.71279373368147,
'f1': 73.4439546123672,
'total': 11873,
'HasAns_exact': 69.92240215924427,
'HasAns_f1': 77.39542393937836,
'HasAns_total': 5928,
'NoAns_exact': 69.50378469301934,
'NoAns_f1': 69.50378469301934,
'NoAns_total': 5945,
'best_exact': 69.71279373368147,
'best_exact_thresh': 0.0,
'best_f1': 73.44395461236732,
'best_f1_thresh': 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-small-finetuned-squadv2')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'What has been discovered by scientists from China ?'
})
# Output:
{'answer': 'A new strain of flu', 'end': 19, 'score': 0.8650811568752914, 'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/roberta-med-small_shared-finetuned-bbc_xsum-summarization | 96faa028d94d2c281103683feb212228f1dff23f | 2021-04-05T11:28:41.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:xsum",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | mrm8488 | null | mrm8488/roberta-med-small_shared-finetuned-bbc_xsum-summarization | 114 | null | transformers | 4,404 | ---
language: en
license: apache-2.0
datasets:
- xsum
tags:
- summarization
---
# Shared RoBERTa2RoBERTa (med-small) Summarization with 🤗EncoderDecoder Framework
This model is a warm-started *RoBERTaShared* (med-small) model fine-tuned on the *BBC XSum* summarization dataset. A minimal usage sketch is shown below.
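The sketch below is not part of the original card: it loads the checkpoint with the generic `EncoderDecoderModel` and `AutoTokenizer` classes; the input text and generation settings are placeholders.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

ckpt = "mrm8488/roberta-med-small_shared-finetuned-bbc_xsum-summarization"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt)

article = "Your BBC-style article here..."  # placeholder input
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
``` |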
pszemraj/led-base-book-summary | 403a4a9dfdc094b0b605a302d05b5b2225d4e511 | 2022-07-27T10:19:51.000Z | [
"pytorch",
"led",
"text2text-generation",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"summary",
"longformer",
"booksum",
"long-document",
"long-form",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | pszemraj | null | pszemraj/led-base-book-summary | 114 | 1 | transformers | 4,405 | ---
tags:
- summarization
- led
- summary
- longformer
- booksum
- long-document
- long-form
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
\ are fed into a neural network that predicts values in the reconstructed domain.\
\ Then, this domain is mapped to the sensor domain where sensor measurements are\
\ available as supervision. Class and Section Problems Addressed Generalization\
\ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
\ Representations (Section 3) Computation & memory efficiency, representation\
\ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
\ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
\ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
\ of techniques in the neural field toolbox each addresses problems that arise\
\ in learning, inference, and control. (Section 3). We can supervise reconstruction\
\ via differentiable forward maps that transform Or project our domain (e.g, 3D\
\ reconstruction via 2D images; Section 4) With appropriate network architecture\
\ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
\ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
\ fields to add constraints and regularizations, and to achieve editable representations\
\ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
\ to help solve problems with neural fields There are three components in a conditional\
\ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
\ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
\ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
\ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
\ field itself $. The encoder \u20AC finds the most probable z given the observations\
\ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
\ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
\ schemes with different optimality guarantees (Section 2.1.1), both global and\
\ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
\ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
\ shape given a partial or noisy point cloud. We need a suitable prior over the\
\ sur- face in its reconstruction domain to generalize to the partial observations.\
\ A neural network expresses a prior via the function space of its architecture\
\ and parameters 0, and generalization is influenced by the inductive bias of\
\ this function space (Section 5)."
example_title: scientific paper
- text: ' the big variety of data coming from diverse sources is one of the key properties
of the big data phenomenon. It is, therefore, beneficial to understand how data
is generated in various environments and scenarios, before looking at what should
be done with this data and how to design the best possible architecture to accomplish
this The evolution of IT architectures, described in Chapter 2, means that the
data is no longer processed by a few big monolith systems, but rather by a group
of services In parallel to the processing layer, the underlying data storage has
also changed and became more distributed This, in turn, required a significant
paradigm shift as the traditional approach to transactions (ACID) could no longer
be supported. On top of this, cloud computing is becoming a major approach with
the benefits of reducing costs and providing on-demand scalability but at the
same time introducing concerns about privacy, data ownership, etc In the meantime
the Internet continues its exponential growth: Every day both structured and unstructured
data is published and available for processing: To achieve competitive advantage
companies have to relate their corporate resources to external services, e.g.
financial markets, weather forecasts, social media, etc While several of the sites
provide some sort of API to access the data in a more orderly fashion; countless
sources require advanced web mining and Natural Language Processing (NLP) processing
techniques: Advances in science push researchers to construct new instruments
for observing the universe O conducting experiments to understand even better
the laws of physics and other domains. Every year humans have at their disposal
new telescopes, space probes, particle accelerators, etc These instruments generate
huge streams of data, which need to be stored and analyzed. The constant drive
for efficiency in the industry motivates the introduction of new automation techniques
and process optimization: This could not be done without analyzing the precise
data that describe these processes. As more and more human tasks are automated,
machines provide rich data sets, which can be analyzed in real-time to drive efficiency
to new levels. Finally, it is now evident that the growth of the Internet of Things
is becoming a major source of data. More and more of the devices are equipped
with significant computational power and can generate a continuous data stream
from their sensors. In the subsequent sections of this chapter, we will look at
the domains described above to see what they generate in terms of data sets. We
will compare the volumes but will also look at what is characteristic and important
from their respective points of view. 3.1 The Internet is undoubtedly the largest
database ever created by humans. While several well described; cleaned, and structured
data sets have been made available through this medium, most of the resources
are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
several examples in the areas such as opinion mining, social media analysis, e-governance,
etc, clearly show the potential lying in these resources. Those who can successfully
mine and interpret the Internet data can gain unique insight and competitive advantage
in their business An important area of data analytics on the edge of corporate
IT and the Internet is Web Analytics.'
example_title: data science textbook
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\
\ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\
\ time & memory complexity (where nn is sequence length). Hence, it's computationally\
\ very expensive to apply transformer-based models on long sequences n > 512n>512.\
\ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\
\ try to remedy this problem by approximating the full attention matrix. You can\
\ checkout \U0001F917's recent blog post in case you are unfamiliar with these\
\ models.\nBigBird (introduced in paper) is one of such recent models to address\
\ this issue. BigBird relies on block sparse attention instead of normal attention\
\ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\
\ much lower computational cost compared to BERT. It has achieved SOTA on various\
\ tasks involving very long sequences such as long documents summarization, question-answering\
\ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\
Transformers. The goal of this post is to give the reader an in-depth understanding\
\ of big bird implementation & ease one's life in using BigBird with \U0001F917\
Transformers. But, before going into more depth, it is important to remember that\
\ the BigBird's attention is an approximation of BERT's full attention and therefore\
\ does not strive to be better than BERT's full attention, but rather to be more\
\ efficient. It simply allows to apply transformer-based models to much longer\
\ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\
\ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\
\ would be preferred over block sparse attention (which we are going to discuss\
\ in this post).\nIf you wonder why we need more compute when working with longer\
\ sequences, this blog post is just right for you!\nSome of the main questions\
\ one might have when working with standard BERT-like attention include:\nDo all\
\ tokens really have to attend to all other tokens? Why not compute attention\
\ only over important tokens? How to decide what tokens are important? How to\
\ attend to just a few tokens in a very efficient way? In this blog post, we will\
\ try to answer those questions.\nWhat tokens should be attended to? We will give\
\ a practical example of how attention works by considering the sentence 'BigBird\
\ is now available in HuggingFace for extractive question answering'. In BERT-like\
\ attention, every word would simply attend to all other tokens.\nLet's think\
\ about a sensible choice of key tokens that a queried token actually only should\
\ attend to by writing some pseudo-code. Will will assume that the token available\
\ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\
\ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\
\ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\
>>> # further let's assume, we're trying to understand the representation of 'available'\
\ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\
\ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\
\ = [] # => currently 'available' token doesn't have anything to attend Nearby\
\ tokens should be important because, in a sentence (sequence of words), the current\
\ word is highly dependent on neighboring past & future tokens. This intuition\
\ is the idea behind the concept of sliding attention."
example_title: bigbird blog intro
- text: 'The majority of available text summarization datasets include short-form
source documents that lack long-range causal and temporal dependencies, and often
contain strong layout and stylistic biases. While relevant, such datasets will
offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form
narrative summarization. Our dataset covers source documents from the literature
domain, such as novels, plays and stories, and includes highly abstractive, human
written summaries on three levels of granularity of increasing difficulty: paragraph-,
chapter-, and book-level. The domain and structure of our dataset poses a unique
set of challenges for summarization systems, which include: processing very long
documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive
summarization models as baselines for our dataset.'
example_title: BookSum Abstract
inference:
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
length_penalty: 0.3
encoder_no_repeat_ngram_size: 3
num_beams: 4
model-index:
- name: pszemraj/led-base-book-summary
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 33.4536
verified: true
- name: ROUGE-2
type: rouge
value: 5.2232
verified: true
- name: ROUGE-L
type: rouge
value: 16.2044
verified: true
- name: ROUGE-LSUM
type: rouge
value: 29.9765
verified: true
- name: loss
type: loss
value: 3.1985862255096436
verified: true
- name: gen_len
type: gen_len
value: 191.9783
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 32.0
verified: true
- name: ROUGE-2
type: rouge
value: 10.0781
verified: true
- name: ROUGE-L
type: rouge
value: 23.6331
verified: true
- name: ROUGE-LSUM
type: rouge
value: 28.7831
verified: true
- name: loss
type: loss
value: 2.903024673461914
verified: true
- name: gen_len
type: gen_len
value: 60.7411
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 30.5046
verified: true
- name: ROUGE-2
type: rouge
value: 13.2577
verified: true
- name: ROUGE-L
type: rouge
value: 19.0306
verified: true
- name: ROUGE-LSUM
type: rouge
value: 28.3421
verified: true
- name: loss
type: loss
value: 3.9484164714813232
verified: true
- name: gen_len
type: gen_len
value: 231.0762
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 36.8502
verified: true
- name: ROUGE-2
type: rouge
value: 15.9147
verified: true
- name: ROUGE-L
type: rouge
value: 23.4762
verified: true
- name: ROUGE-LSUM
type: rouge
value: 30.9597
verified: true
- name: loss
type: loss
value: 3.878790855407715
verified: true
- name: gen_len
type: gen_len
value: 131.3622
verified: true
---
# Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization
- **What:** This is the (current) result of the quest for a summarization model that condenses technical/long information down well _in general, academic and narrative usage_.
- **Use cases:** long narrative summarization (think stories - as the dataset intended), article/paper/textbook/other summarization, technical-to-simple summarization.
- Models trained on this dataset tend to also _explain_ what they are summarizing, which IMO is awesome.
- works well on lots of text, and can handle 16384 tokens/batch.
## About
- Trained for 16 epochs vs. [`pszemraj/led-base-16384-finetuned-booksum`](https://huggingface.co/pszemraj/led-base-16384-finetuned-booksum),
- parameters adjusted for _very_ fine-tuning type training (super low LR, etc)
- all the parameters for generation on the API are the same for easy comparison between versions.
## Other Checkpoints on Booksum
- See [led-large-book-summary](https://huggingface.co/pszemraj/led-large-book-summary) for LED-large trained on the same dataset.
---
# Usage - Basics
- it is recommended to use `encoder_no_repeat_ngram_size=3` when calling the pipeline object to improve summary quality.
- this param forces the model to use new vocabulary and create an abstractive summary; otherwise it may simply compile the best _extractive_ summary from the input provided.
- create the pipeline object:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import pipeline

hf_name = 'pszemraj/led-base-book-summary'

_model = AutoModelForSeq2SeqLM.from_pretrained(
    hf_name,
    low_cpu_mem_usage=True,
)

_tokenizer = AutoTokenizer.from_pretrained(
    hf_name
)

summarizer = pipeline(
    "summarization",
    model=_model,
    tokenizer=_tokenizer
)
```
- put words into the pipeline object:
```python
wall_of_text = "your words here"

result = summarizer(
    wall_of_text,
    min_length=8,
    max_length=256,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    num_beams=4,
    do_sample=False,
    early_stopping=True,
)
print(result[0]['summary_text'])
```
---
|
sosuke/ease-bert-base-uncased | de261cc6fcb12e91dcf19feb88c5f30179746209 | 2021-12-29T08:02:04.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sosuke | null | sosuke/ease-bert-base-uncased | 114 | null | transformers | 4,406 | Entry not found |
vkorennoy/gpt3_medium | 8d03577c67a66719c6bdc51ed5da7e6d6cf64fbe | 2021-11-21T20:55:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | vkorennoy | null | vkorennoy/gpt3_medium | 114 | null | transformers | 4,407 | Entry not found |
yhavinga/t5-v1.1-base-dutch-cased | 572919e67fdfc0977d10ffeda46689d7cc4f623f | 2022-06-14T10:28:45.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"nl",
"dataset:yhavinga/mc4_nl_cleaned",
"arxiv:1910.10683",
"arxiv:2109.10686",
"transformers",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | yhavinga | null | yhavinga/t5-v1.1-base-dutch-cased | 114 | 1 | transformers | 4,408 | ---
language:
- nl
datasets:
- yhavinga/mc4_nl_cleaned
tags:
- t5
- seq2seq
inference: false
license: apache-2.0
---
# t5-v1.1-base-dutch-cased
A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model
pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned).
This **t5-v1.1** model has **247M** parameters.
It was pre-trained with the masked language modeling objective on the dataset
`mc4_nl_cleaned` config `full` for **2** epoch(s) and a duration of **6d6h**,
with a sequence length of **1024**, batch size **64** and **1210154** total steps (**79B** tokens).
Pre-training evaluation loss and accuracy are **0,96** and **0,78**.
Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
* Pre-trained T5 models need to be finetuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off.
* For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for
the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!
Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture
and configs, though it must be noted that this model (t5-v1.1-base-dutch-cased) is unrelated to these projects and not an 'official' checkpoint.
* **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*.
* **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch mc4 with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
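As a brief illustration (not part of the original card), the checkpoint and its tokenizer can be loaded with the standard 🤗 Transformers classes roughly as follows; no task-specific head or training loop is shown:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

name = "yhavinga/t5-v1.1-base-dutch-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)
# Fine-tune on a Dutch downstream task (e.g. summarization or translation)
# before using the model for inference, as noted above.
```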
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function,
and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The T5-eff models differ in their number of layers. The table below lists
the dimensions of these models. Not all t5-eff models are efficient; the best example is the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
## Evaluation
Most models from the list above have been evaluated on summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and the y-axis the summarization Rouge1 score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green, models with slower
inference speed in blue.

The next two sections provide more information on how the evaluation was performed.
## Evaluation on summarization
The models below have been evaluated for summarization on 50K samples from the CNN Dailymail dataset.
All models were fine-tuned with the AdamW optimizer with a batch size of 128 and constant learning rate of 1e-3 after a
warmup of 32 steps, with a label smoothing factor of 0.05. Article and summary token lengths were set to 1024 and 142.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
The numbers reported are the Rouge scores on 1000 documents from the test split. The rouge1 score is visualized in the figure above.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
## Evaluation on translation
The models below have been evaluated for English to Dutch translation on 50K samples from the CCMatrix dataset.
Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because
the translation direction is English to Dutch.
All models were fine-tuned with the AdamW optimizer with a batch size of 128 and constant learning rate of 5e-5 after a
warmup of 32 steps, with a label smoothing factor of 0.1 and maximum sequence length of 128 tokens.
The numbers reported are the Bleu scores on 1000 documents from the test split.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score
averaged over all three evaluation datasets. The best scores are displayed in bold for both translation directions.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories were helpful in setting up the TPU-VM,
and getting an idea what sensible hyper-parameters are for training gpt2 from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
facebook/data2vec-vision-large | 6e101b3a00728b37698cffb19925ffa6fceba108 | 2022-05-03T15:57:29.000Z | [
"pytorch",
"tf",
"data2vec-vision",
"feature-extraction",
"dataset:imagenet",
"dataset:imagenet-1k",
"arxiv:2202.03555",
"arxiv:2106.08254",
"transformers",
"image-classification",
"vision",
"license:apache-2.0"
] | feature-extraction | false | facebook | null | facebook/data2vec-vision-large | 114 | null | transformers | 4,409 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-1k
---
# Data2Vec-Vision (large-sized model, pre-trained only)
BEiT model pre-trained in a self-supervised fashion on ImageNet-1k (1,2 million images, 1000 classes) at resolution 224x224. It was introduced in the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli and first released in [this repository](https://github.com/facebookresearch/data2vec_vision/tree/main/beit).
Disclaimer: The Facebook team releasing this model did not write a model card for it, so this model card has been written by the Hugging Face team.
## Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
## Abstract
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because
they were developed with a single modality in
mind. To get us closer to general self-supervised
learning, we present data2vec, a framework that
uses the same learning method for either speech,
NLP or computer vision. The core idea is to predict latent representations of the full input data
based on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific
targets such as words, visual tokens or units of
human speech which are local in nature, data2vec
predicts contextualized latent representations that
contain information from the entire input. Experiments on the major benchmarks of speech
recognition, image classification, and natural language understanding demonstrate a new state of
the art or competitive performance to predominant approaches.*
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=data2vec-vision) to look for
fine-tuned versions on a task that interests you.
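Since this checkpoint is pre-trained only, a hedged sketch (not from the original card) of using it as a feature extractor is shown below; `path/to/your_image.jpg` is a placeholder and the processor config is assumed to ship with the repository:
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, Data2VecVisionModel

name = "facebook/data2vec-vision-large"
feature_extractor = AutoFeatureExtractor.from_pretrained(name)
model = Data2VecVisionModel.from_pretrained(name)

image = Image.open("path/to/your_image.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, sequence_length, hidden_size)
```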
## Training data
The BEiT model was pretrained on [ImageNet-1k](http://www.image-net.org/), a dataset consisting of 1,2 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to the [original paper](https://arxiv.org/abs/2106.08254) and the [original codebase](https://github.com/facebookresearch/data2vec_vision/tree/main/beit)
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to table 1 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution. Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1 | 3e34d551c81e585640ad01311df41b56650d3e8a | 2022-06-17T06:34:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"zh-CN",
"dataset:aishell1",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | qinyue | null | qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1 | 114 | null | transformers | 4,410 | ---
language: zh-CN
datasets:
- aishell1
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 - Chinese (zh-CN), by Yue Qin
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: AISHELL-1 zh-CN
type: aishell1
args: zh-CN
metrics:
- name: Test WER
type: wer
value: 7.04
---
# Wav2Vec2-Large-XLSR-53-Chinese-zh-CN-aishell1
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese using the [AISHELL-1](https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
device = "cuda:0" if torch.cuda.is_available() else "cpu"
processor = Wav2Vec2Processor.from_pretrained(
    'qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1')
model = Wav2Vec2ForCTC.from_pretrained(
    'qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1').to(device)

filepath = 'test.wav'
audio, sr = librosa.load(filepath, sr=16000, mono=True)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt").to(device)

with torch.no_grad():
    logits = model(inputs.input_values,
                   attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
pred_str = processor.decode(predicted_ids[0])
print(pred_str)
```
## Evaluation
```python
import numpy as np
from datasets import load_metric

wer_metric = load_metric("wer")

# `processor` is the Wav2Vec2Processor loaded in the usage example above.
def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids, spaces_between_special_tokens=True)
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False, spaces_between_special_tokens=True)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
```
## Results
| Reference | Prediction |
| ------------- | ------------- |
| 据 伟 业 我 爱 我 家 市 场 研 究 院 测 算 | 据 北 业 我 爱 我 家 市 场 研 究 院 测 算 |
| 七 月 北 京 公 积 金 贷 款 成 交 量 提 升 了 百 分 之 五 | 七 月 北 京 公 积 金 贷 款 成 交 量 提 升 了 百 分 之 五 |
| 培 育 门 类 丰 富 层 次 齐 用 的 综 合 利 用 产 业 | 培 育 门 类 丰 富 层 资 集 业 的 综 合 利 用 产 业 |
| 我 们 迎 来 了 赶 超 发 达 国 家 的 难 得 机 遇 | 我 们 迎 来 了 赶 超 发 达 国 家 的 单 得 机 遇 |
| 坚 持 基 本 草 原 保 护 制 度 | 坚 持 基 本 草 员 保 护 制 度 |
| 强 化 水 生 生 态 修 复 和 建 设 | 强 化 水 生 生 态 修 复 和 建 设 |
| 温 州 两 男 子 为 争 女 人 驾 奔 驰 宝 马 街 头 四 次 对 撞 | 温 州 两 男 子 为 争 女 人 架 奔 驰 宝 马 接 头 四 次 对 重 |
| 她 表 示 应 该 是 吃 吃 饭 看 电 影 之 类 的 | 他 表 示 一 的 是 吃 吃 饭 看 电 影 之 理 |
| 加 强 畜 禽 遗 传 资 源 和 农 业 野 生 植 物 资 源 保 护 | 加 强 续 紧 遗 传 资 源 和 农 业 野 生 职 物 资 源 保 护 |
| 两 人 都 是 依 赖 电 话 沟 通 | 两 人 都 是 依 赖 电 话 沟 通 |
**Test Result**:
In the table below I report the Word Error Rate (WER) of the model on the AISHELL-1 test dataset.
| Model | WER | WER-with-LM |
| ------------- | ------------- | ------------- |
| qinyue/wav2vec2-large-xlsr-53-chinese-zn-cn-aishell1 | **7.04%** | **3.96%** | |
google/ddpm-ema-bedroom-256 | f6c8a04739a77c61b12678f728117be059bf8f98 | 2022-07-21T15:00:25.000Z | [
"diffusers",
"arxiv:2006.11239",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ddpm-ema-bedroom-256 | 114 | null | diffusers | 4,411 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-ema-bedroom-256"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm()["sample"]
# save image
image[0].save("ddpm_generated_image.png")
```
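As a rough illustration of that trade-off, the same checkpoint can also be loaded into the DDIM pipeline with a reduced number of denoising steps. The snippet below is a sketch following the API shown above; the step count of 50 is an arbitrary example, not a value recommended by the authors.

```python
from diffusers import DDIMPipeline

model_id = "google/ddpm-ema-bedroom-256"

# load the same checkpoint with the faster DDIM sampler
ddim = DDIMPipeline.from_pretrained(model_id)

# fewer denoising steps -> faster sampling, slightly lower quality
image = ddim(num_inference_steps=50)["sample"]
image[0].save("ddim_generated_image.png")
```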
For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4.  |
Luyu/bert-base-mdoc-bm25 | b03a0fb2e341e38403bdb7fc8d79ec9ee29fdce3 | 2021-09-22T08:11:56.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:MS MARCO document ranking",
"transformers",
"text reranking",
"license:apache-2.0"
] | text-classification | false | Luyu | null | Luyu/bert-base-mdoc-bm25 | 113 | null | transformers | 4,412 | ---
language:
- en
tags:
- text reranking
license: apache-2.0
datasets:
- MS MARCO document ranking
---
# BERT Reranker for MS-MARCO Document Ranking
## Model description
A text reranker trained for BM25 retriever on MS MARCO document dataset.
## Intended uses & limitations
It is possible to use this reranker with other retrievers, but it works best with the aligned BM25 retriever it was trained for.
We used the Anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87), following [these instructions](https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-doc.md).
#### How to use
See our [project repo page](https://github.com/luyug/Reranker).
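If you only want to score a handful of query-document pairs without the full Reranker training/inference code, the checkpoint can also be loaded directly with 🤗 Transformers. The snippet below is a minimal sketch: the query and passage are made-up examples, and the exact convention for turning the logits into a ranking score follows the project repo linked above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Luyu/bert-base-mdoc-bm25")
model = AutoModelForSequenceClassification.from_pretrained("Luyu/bert-base-mdoc-bm25")
model.eval()

query = "what is the capital of France"
passage = "Paris is the capital and most populous city of France."

# encode the query-document pair and score it
inputs = tokenizer(query, passage, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# higher score = more relevant; see the project repo for the exact scoring convention
print(logits)
```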
## Eval results
MRR @10: 0.423 on Dev.
### BibTeX entry and citation info
```bibtex
@inproceedings{gao2021lce,
title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline},
author={Luyu Gao and Zhuyun Dai and Jamie Callan},
year={2021},
booktitle={The 43rd European Conference On Information Retrieval (ECIR)},
}
``` |
MrGentle/DeltaModel-genius1 | 47747a1b8f3809ded25df76214f0e5c6d7fc1e68 | 2021-12-02T19:47:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MrGentle | null | MrGentle/DeltaModel-genius1 | 113 | null | transformers | 4,413 | ---
tags:
- conversational
---
# Delta Chat Model |
jonasmue/cover-letter-gpt2 | b3d9d3e4ee9e3140cc28e026a205ffb65110ffbf | 2021-05-23T06:00:04.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jonasmue | null | jonasmue/cover-letter-gpt2 | 113 | 4 | transformers | 4,414 | Entry not found |
ponmari/Question-Answering | 98bd620222d8d6dabdcb1c01a6a0ce41f26ec935 | 2020-07-21T07:56:56.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ponmari | null | ponmari/Question-Answering | 113 | null | transformers | 4,415 | Entry not found |
PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer | 8f603cb46a32a869104134e1daaed6944ff84867 | 2022-04-12T10:07:56.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"arxiv:1907.11692",
"transformers",
"biomedical",
"clinical",
"eHR",
"spanish",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | PlanTL-GOB-ES | null | PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer | 113 | 1 | transformers | 4,416 | ---
language:
- es
tags:
- biomedical
- clinical
- eHR
- spanish
license: apache-2.0
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
metrics:
- name: f1
type: f1
value: 0.8913
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# Spanish RoBERTa-base biomedical model finetuned for the Named Entity Recognition (NER) task on the PharmaCoNER dataset.
This is a fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on the largest Spanish biomedical corpus known to date, composed of biomedical documents, clinical cases and EHR documents, for a total of 1.1B tokens of clean and deduplicated text.
For more details about the corpora and training, check the _bsc-bio-ehr-es_ model card.
## Dataset
The dataset used is [PharmaCoNER](https://huggingface.co/datasets/PlanTL-GOB-ES/pharmaconer), a NER dataset annotated with substances, compounds and proteins entities. For further information, check the [official website](https://temu.bsc.es/pharmaconer/).
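For quick experimentation, the fine-tuned checkpoint can be loaded with the standard token-classification pipeline. This is a minimal sketch: the example sentence is adapted from the widget examples above, and the aggregation strategy is an illustrative choice rather than part of the official evaluation setup.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer",
    aggregation_strategy="simple",  # merge sub-word pieces into full entity spans
)

text = "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D."
print(ner(text))
```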
## Evaluation and results
F1 Score: 0.8913
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
## Citing
To be announced soon!
## Funding
This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020).
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
|
allenai/tk-instruct-11b-def-pos-neg-expl | 9089d1859a862d48f79cbb1043dfd684f850d28b | 2022-05-27T06:31:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:natural instructions v2.0",
"arxiv:1910.10683",
"arxiv:2204.07705",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/tk-instruct-11b-def-pos-neg-expl | 113 | 1 | transformers | 4,417 | ---
language: en
license: apache-2.0
datasets:
- natural instructions v2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, task definition or demonstration examples or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True)   # model should output 'John did not go to school.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting result, you are welcome to share with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. Tk-Instruct model series were trained using 757 tasks, and mTk-Instruct series were trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encodings, we found they can usually work with other type of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` |
neuralmagic/oBERT-6-upstream-pretrained-dense | d75d5e3bc00c8b8cb370f77b55160be833319903 | 2022-06-20T11:36:52.000Z | [
"pytorch",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-6-upstream-pretrained-dense | 113 | null | null | 4,418 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-6-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to 6 layers from `neuralmagic/oBERT-12-upstream-pretrained-dense`, pretrained with knowledge distillation. This model is used as a starting point for downstream finetuning and pruning runs presented in the `Table 3 - 6 Layers`.
The model can also be used for finetuning on any downstream task, as a starting point instead of the two times larger `bert-base-uncased` model.
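As a rough illustration (not an official recipe), loading this checkpoint as the backbone for a downstream task could look like the sketch below. It assumes the weights are stored in the standard 🤗 Transformers BERT format and that the task-specific head is randomly initialized before fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "neuralmagic/oBERT-6-upstream-pretrained-dense"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# the 2-label classification head is newly initialized and must be fine-tuned
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
```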
Finetuned and pruned versions of this model on the SQuADv1 downstream task, as described in the paper:
- 0%: `neuralmagic/oBERT-6-downstream-dense-squadv1`
- 80% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1`
```
Training objective: masked language modeling (MLM) + knowledge distillation
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 6
```
Code: _coming soon_
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
wvangils/GPT-Medium-Beatles-Lyrics-finetuned-newlyrics | d89e083768039041645fd25c7514adbbae20cc6a | 2022-06-17T11:22:00.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | wvangils | null | wvangils/GPT-Medium-Beatles-Lyrics-finetuned-newlyrics | 113 | null | transformers | 4,419 | ---
tags:
- generated_from_trainer
model-index:
- name: GPT-Medium-Beatles-Lyrics-finetuned-newlyrics
results: []
---
# GPT-Medium-Beatles-Lyrics-finetuned-newlyrics
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the [Cmotions - Beatles lyrics](https://huggingface.co/datasets/cmotions/Beatles_lyrics) dataset. It will complete an input prompt with Beatles-like text.
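A minimal generation sketch is shown below; the prompt and the sampling settings are arbitrary examples, not values used during training.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="wvangils/GPT-Medium-Beatles-Lyrics-finetuned-newlyrics",
)

prompt = "Last night I dreamed of you"
print(generator(prompt, max_length=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```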
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.603 | 1.0 | 138 | 1.6786 |
| 1.88 | 2.0 | 276 | 1.7219 |
| 1.2914 | 3.0 | 414 | 1.8531 |
| 0.9323 | 4.0 | 552 | 1.9556 |
| 0.7111 | 5.0 | 690 | 2.0697 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
yuningm/bart-large-citesum | 1e8e063fc922fd7cfdc1685748428b4877f85bb9 | 2022-07-08T20:54:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:yuningm/citesum",
"arxiv:2205.06207",
"transformers",
"summarization",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | summarization | false | yuningm | null | yuningm/bart-large-citesum | 113 | 1 | transformers | 4,420 | ---
license: cc-by-nc-4.0
language: en
tags:
- summarization
datasets:
- yuningm/citesum
widget:
- text: "Abstract-This paper presents a control strategy that allows a group of mobile robots to position themselves to optimize the measurement of sensory information in the environment. The robots use sensed information to estimate a function indicating the relative importance of different areas in the environment. Their estimate is then used to drive the network to a desirable placement configuration using a computationally simple decentralized control law. We formulate the problem, provide a practical control solution, and present the results of numerical simulations. We then discuss experiments carried out on a swarm of mobile robots."
example_title: "Networked Robots"
- text: "Abstract. In this paper, a Bayesian method for face recognition is proposed based on Markov Random Fields (MRF) modeling. Constraints on image features as well as contextual relationships between them are explored and encoded into a cost function derived based on a statistical model of MRF. Gabor wavelet coefficients are used as the base features, and relationships between Gabor features at different pixel locations are used to provide higher order contextual constraints. The posterior probability of matching configuration is derived based on MRF modeling. Local search and discriminate analysis are used to evaluate local matches, and a contextual constraint is applied to evaluate mutual matches between local matches. The proposed MRF method provides a new perspective for modeling the face recognition problem. Experiments demonstrate promising results."
example_title: "Bayesian Face Recognition"
- text: "Abstract One of the most relevant applications of digital image forensics is to accurately identify the device used for taking a given set of images, a problem called source identification. This paper studies recent developments in the field and proposes the mixture of two techniques (Sensor Imperfections and Wavelet Transforms) to get better source identification of images generated with mobile devices. Our results show that Sensor Imperfections and Wavelet Transforms can jointly serve as good forensic features to help trace the source camera of images produced by mobile phones. Furthermore, the model proposed here can also determine with high precision both the brand and model of the device."
example_title: "Source identification for mobile devices"
---
# Bart-Large CiteSum (Sentences)
This is facebook/bart-large fine-tuned on CiteSum.
The "src" column is the input and the "tgt" column is the target summarization.
## Authors
### Yuning Mao, Ming Zhong, Jiawei Han
#### University of Illinois Urbana-Champaign
{yuningm2, mingz5, hanj}@illinois.edu
## Results
```
{
"epoch": 5.28,
"eval_gen_len": 37.0464,
"eval_loss": 2.058537483215332,
"eval_rouge1": 41.3415,
"eval_rouge2": 19.2246,
"eval_rougeL": 33.3258,
"eval_rougeLsum": 33.5075,
"eval_runtime": 697.7289,
"eval_samples": 4721,
"eval_samples_per_second": 6.766,
"eval_steps_per_second": 0.847,
"predict_gen_len": 37.0159,
"predict_loss": 2.0521159172058105,
"predict_rouge1": 41.9288,
"predict_rouge2": 19.5963,
"predict_rougeL": 33.7098,
"predict_rougeLsum": 33.9124,
"predict_runtime": 718.1231,
"predict_samples": 4921,
"predict_samples_per_second": 6.853,
"predict_steps_per_second": 0.858,
"train_loss": 1.7884394331498579,
"train_runtime": 23049.0303,
"train_samples": 83304,
"train_samples_per_second": 69.417,
"train_steps_per_second": 8.677
}
```
## Dataset Description
CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation.
CiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR.
## Homepage
https://github.com/morningmoni/CiteSum
## Paper
https://arxiv.org/abs/2205.06207
## Dataset on Hub
https://huggingface.co/datasets/nbroad/citesum
## How to use model
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="yuningm/bart-large-citesum")
article = ''' We describe a convolutional neural network that learns\
feature representations for short textual posts using hashtags as a\
supervised signal. The proposed approach is trained on up to 5.5 \
billion words predicting 100,000 possible hashtags. As well as strong\
performance on the hashtag prediction task itself, we show that its \
learned representation of text (ignoring the hashtag labels) is useful\
for other tasks as well. To that end, we present results on a document\
recommendation task, where it also outperforms a number of baselines.
'''
summarizer(article)
# [{'summary_text': 'REF proposed a convolutional neural network
# that learns feature representations for short textual posts
# using hashtags as a supervised signal.'}]
```
|
nielsr/donut-base-finetuned-docvqa | c677933f9efa82eb6d72ba6acd0e1cece060e483 | 2022-07-26T09:47:02.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | nielsr | null | nielsr/donut-base-finetuned-docvqa | 113 | null | transformers | 4,421 | Entry not found |
DeepChem/ChemBERTa-5M-MLM | 695672a601a8077ace5d0a6fc2529e93cc9cfa9c | 2022-01-20T17:59:00.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DeepChem | null | DeepChem/ChemBERTa-5M-MLM | 112 | null | transformers | 4,422 | Entry not found |
NDugar/debertav3-mnli-snli-anli | 818713e9f926ab18a0377103520ab6af073517d4 | 2021-11-29T20:56:42.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
] | zero-shot-classification | false | NDugar | null | NDugar/debertav3-mnli-snli-anli | 112 | 3 | transformers | 4,423 | ---
language: en
tags:
- deberta-v3
- deberta-v2`
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks using 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained on 160GB of raw data.
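Since this particular checkpoint is fine-tuned on NLI data (MNLI/SNLI/ANLI) and exposed through the zero-shot-classification pipeline, a minimal usage sketch looks as follows; the example sequence and candidate labels are made up for illustration.

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="NDugar/debertav3-mnli-snli-anli",
)

sequence = "The new update drains my phone battery within a few hours."
candidate_labels = ["battery life", "camera quality", "screen resolution"]
print(classifier(sequence, candidate_labels))
```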
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \\
run_glue.py \\
--model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 256 \\
--per_device_train_batch_size ${batch_size} \\
--learning_rate 3e-6 \\
--num_train_epochs 3 \\
--output_dir $output_dir \\
--overwrite_output_dir \\
--logging_steps 10 \\
--logging_dir $output_dir \\
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` |
avichr/hebEMO_trust | 8bca4dfbf5dc6975d3e0cf24c76ac32e31df3eac | 2022-04-15T09:36:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | avichr | null | avichr/hebEMO_trust | 112 | null | transformers | 4,424 | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew user-generated content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
dbmdz/electra-small-turkish-cased-discriminator | 1c70896935f1be63be7f30ac3faf9b9d417e8c2c | 2020-12-11T21:37:29.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"transformers",
"license:mit"
] | null | false | dbmdz | null | dbmdz/electra-small-turkish-cased-discriminator | 112 | null | transformers | 4,425 | ---
language: tr
license: mit
---
# 🤗 + 📚 dbmdz Turkish ELECTRA model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA small model for Turkish 🎉
# Turkish ELECTRA model
We release a small ELEC**TR**A model for Turkish, that was trained on the same data as *BERTurk*.
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
## Model weights
[Transformers](https://github.com/huggingface/transformers)
compatible weights for both PyTorch and TensorFlow are available.
| Model | Downloads
| ------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/electra-small-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/vocab.txt)
## Usage
With Transformers >= 2.8 our ELECTRA small cased model can be loaded like:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-small-turkish-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-small-turkish-cased-discriminator")
```
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert/electra).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
facebook/detr-resnet-50-dc5-panoptic | a173bbe3e341ee79acc040e59f03ec0efeb2dd17 | 2021-11-04T17:49:03.000Z | [
"pytorch",
"detr",
"image-segmentation",
"dataset:coco",
"arxiv:2005.12872",
"transformers",
"license:apache-2.0"
] | image-segmentation | false | facebook | null | facebook/detr-resnet-50-dc5-panoptic | 112 | 1 | transformers | 4,426 | ---
license: apache-2.0
tags:
- image-segmentation
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/dog-cat.jpg
example_title: Dog & Cat
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/construction-site.jpg
example_title: Construction Site
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/apple-orange.jpg
example_title: Apple & Orange
---
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.
## Intended uses & limitations
You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForSegmentation
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5-panoptic')
model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-50-dc5-panoptic')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts COCO classes, bounding boxes, and masks
logits = outputs.logits
bboxes = outputs.pred_boxes
masks = outputs.pred_masks
```
Currently, both the feature extractor and model support PyTorch.
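To turn the raw outputs into an actual panoptic segmentation map, the feature extractor's panoptic post-processing can be applied. The following is a sketch based on the `post_process_panoptic` helper, continuing from the variables defined above; the default confidence threshold is kept and the result is simply decoded into a numpy array of segment ids.

```python
import io
import numpy as np
import torch
from PIL import Image

# resolution the model actually saw after preprocessing
processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]

# the segmentation map is returned as a PNG-encoded string
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg = np.array(panoptic_seg, dtype=np.uint8)
```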
## Training data
The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.2**, a segmentation AP (average precision) of **31.9** and a PQ (panoptic quality) of **44.6**.
For more details regarding evaluation results, we refer to table 5 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
google/tapas-large | 879ccbedad111da5c324a234b6b719b2b9e5bafa | 2021-11-29T10:18:23.000Z | [
"pytorch",
"tf",
"tapas",
"feature-extraction",
"en",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"TapasModel",
"license:apache-2.0"
] | feature-extraction | false | google | null | google/tapas-large | 112 | null | transformers | 4,427 | ---
language: en
tags:
- tapas
- TapasModel
license: apache-2.0
---
# TAPAS large model
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="no_reset"`, which corresponds to `tapas_inter_masklm_large`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then
jointly training these randomly initialized classification heads with the base model on a downstream task.
## Intended uses & limitations
You can use the raw model for getting hidden representations of table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.
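As a minimal sketch of feature extraction with the raw model (the table and query below are made-up examples; note that TAPAS expects all table values as strings):

```python
import pandas as pd
from transformers import TapasTokenizer, TapasModel

tokenizer = TapasTokenizer.from_pretrained("google/tapas-large")
model = TapasModel.from_pretrained("google/tapas-large")

data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]}
table = pd.DataFrame.from_dict(data)
queries = ["How old is Brad Pitt?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state  # hidden representation of the table-question pair
```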
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Pre-training
The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.
The optimizer used is Adam with a learning rate of 5e-5, and a warmup
ratio of 0.01.
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
uclanlp/plbart-multi_task-python | aa3bcd034d63011cf66243922b08920a6dbe7a3e | 2022-03-02T07:31:29.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-python | 112 | null | transformers | 4,428 | Entry not found |
voidful/albert_chinese_xxlarge | eabcac5c4cfac10e44c6d6b71ef6a22f4e4137b6 | 2021-08-03T05:07:41.000Z | [
"pytorch",
"albert",
"fill-mask",
"zh",
"transformers",
"autotrain_compatible"
] | fill-mask | false | voidful | null | voidful/albert_chinese_xxlarge | 112 | 2 | transformers | 4,429 | ---
language: zh
pipeline_tag: fill-mask
widget:
- text: "今天[MASK]情很好"
---
# albert_chinese_xxlarge
This a albert_chinese_xxlarge model from [Google's github](https://github.com/google-research/ALBERT)
converted by huggingface's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py)
## Notice
*Support AutoTokenizer*
Since sentencepiece is not used in this albert_chinese model,
you have to call BertTokenizer instead of AlbertTokenizer !!!
We can verify this by evaluating a MaskedLM example.
由於 albert_chinese_base 模型沒有用 sentencepiece
用AlbertTokenizer會載不進詞表,因此需要改用BertTokenizer !!!
我們可以跑MaskedLM預測來驗證這個做法是否正確
## Verification (驗證有效性)
```python
from transformers import AutoTokenizer, AlbertForMaskedLM
import torch
from torch.nn.functional import softmax
pretrained = 'voidful/albert_chinese_xxlarge'
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)
inputtext = "今天[MASK]情很好"
maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)
input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=input_ids)
loss, prediction_scores = outputs[:2]
logit_prob = softmax(prediction_scores[0, maskpos],dim=-1).data.tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `心 0.995713472366333`
|
josh-oo/german-gpt2-easy | 876af797a29ba247774e29660e15282242320f44 | 2022-07-20T11:32:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"de",
"transformers"
] | text-generation | false | josh-oo | null | josh-oo/german-gpt2-easy | 112 | null | transformers | 4,430 | ---
language:
- de
widget:
- text: "Der Sinn des Lebens ist"
example_title: "Sinn des Lebens"
inference:
parameters:
temperature: 0.7
repetition_penalty: 1.4
--- |
nlokam/adanimals_V1 | 46ddd7e6b5eb8eb9edc75ea009d32e971f5be21c | 2022-06-12T18:02:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | nlokam | null | nlokam/adanimals_V1 | 112 | null | transformers | 4,431 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
--- |
Evelyn18/roberta-base-spanish-squades-becasv3 | e63052bebd1f536c4e18cdba5e4319585a8bbb89 | 2022-07-25T02:46:16.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-becasv3 | 112 | null | transformers | 4,432 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasv3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasv3
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0111
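As a quick illustration of inference (the question and context below are invented examples, not items from the becasv2 dataset):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-becasv3",
)

result = qa(
    question="¿A quién está dirigida la beca?",
    context="La beca está dirigida a estudiantes de pregrado con un promedio mínimo de 8.5.",
)
print(result)
```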
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 1.75 |
| No log | 2.0 | 10 | 0.532 |
| No log | 3.0 | 15 | 0.0925 |
| No log | 4.0 | 20 | 0.0286 |
| No log | 5.0 | 25 | 0.0111 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1 |
dbmdz/electra-base-turkish-mc4-uncased-discriminator | bd4d8062c25f3f0c699e451f2f6b07df826e83e4 | 2021-09-23T10:43:12.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"dataset:allenai/c4",
"transformers",
"license:mit"
] | null | false | dbmdz | null | dbmdz/electra-base-turkish-mc4-uncased-discriminator | 111 | 2 | transformers | 4,433 | ---
language: tr
license: mit
datasets:
- allenai/c4
---
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (uncased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
tokenizer = AutoTokenizer.from_pretrained("electra-base-turkish-mc4-uncased-discriminator")
model = AutoModel.from_pretrained("electra-base-turkish-mc4-uncased-discriminator")
```
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️ |
jimypbr/bert-base-uncased-squad | bd8f5357b298a87247fb0cd991d52ccc6008fc52 | 2022-02-11T22:28:31.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | jimypbr | null | jimypbr/bert-base-uncased-squad | 111 | null | transformers | 4,434 | ---
license: apache-2.0
---
# BERT-Base Uncased SQuADv1
`bert-base-uncased` trained on question answering with `squad`.
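A minimal usage sketch (not part of the original card) with the standard `question-answering` pipeline; the example question and context are purely illustrative.
```python
from transformers import pipeline

# Minimal sketch: extractive QA with the fine-tuned SQuADv1 checkpoint.
qa = pipeline("question-answering", model="jimypbr/bert-base-uncased-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This BERT-base model was fine-tuned on the SQuAD v1.1 dataset for extractive question answering.",
)
print(result["answer"])  # expected to point at the answer span in the context
```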
Evaluation scores:
```
***** eval metrics *****
epoch = 3.0
eval_exact_match = 80.6906
eval_f1 = 88.1129
eval_samples = 10784
``` |
sdadas/polish-roberta-large-v2 | e7e302a75f230f10ae8673560e7b3451745cc9e4 | 2022-02-19T10:25:37.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:lgpl-3.0",
"autotrain_compatible"
] | fill-mask | false | sdadas | null | sdadas/polish-roberta-large-v2 | 111 | 1 | transformers | 4,435 | ---
license: lgpl-3.0
---
|
textattack/distilbert-base-cased-SST-2 | c2e8de47524988a4a0bd47f3b2ee6a65912617e6 | 2020-06-09T16:46:25.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-cased-SST-2 | 111 | null | transformers | 4,436 | Entry not found |
wietsedv/bert-base-dutch-cased-finetuned-sonar-ner | 7d3e7d3e2a7d79f848d0249f342df7f26f4c6678 | 2021-05-20T09:09:52.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/bert-base-dutch-cased-finetuned-sonar-ner | 111 | null | transformers | 4,437 | Entry not found |
9pinus/macbert-base-chinese-medicine-recognition | 3a20f1d5d353ee11ce93f9ca885b3d7d859eb33e | 2022-03-02T09:20:41.000Z | [
"pytorch",
"bert",
"token-classification",
"zh",
"transformers",
"Token Classification",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | 9pinus | null | 9pinus/macbert-base-chinese-medicine-recognition | 111 | 2 | transformers | 4,438 | ---
license: apache-2.0
tags:
- Token Classification
language:
- zh
---
## Model description
This model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.
## Intended use
```python
>>> from transformers import (AutoModelForTokenClassification, AutoTokenizer)
>>> from transformers import pipeline
>>> hub_model_id = "9pinus/macbert-base-chinese-medicine-recognition"
>>> model = AutoModelForTokenClassification.from_pretrained(hub_model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(hub_model_id)
>>> classifier = pipeline('ner', model=model, tokenizer=tokenizer)
>>> result = classifier("如果病情较重,可适当口服甲硝唑片、环酯红霉素片、吲哚美辛片等药物进行抗感染镇痛。")
>>> for item in result:
>>> if item['entity'] == 1 or item['entity'] == 2:
>>> print(item)
{'entity': 1, 'score': 0.99999595, 'index': 13, 'word': '甲', 'start': 12, 'end': 13}
{'entity': 2, 'score': 0.9999957, 'index': 14, 'word': '硝', 'start': 13, 'end': 14}
{'entity': 2, 'score': 0.99999166, 'index': 15, 'word': '唑', 'start': 14, 'end': 15}
{'entity': 2, 'score': 0.99898833, 'index': 16, 'word': '片', 'start': 15, 'end': 16}
{'entity': 1, 'score': 0.9999864, 'index': 18, 'word': '环', 'start': 17, 'end': 18}
{'entity': 2, 'score': 0.99999404, 'index': 19, 'word': '酯', 'start': 18, 'end': 19}
{'entity': 2, 'score': 0.99999475, 'index': 20, 'word': '红', 'start': 19, 'end': 20}
{'entity': 2, 'score': 0.9999964, 'index': 21, 'word': '霉', 'start': 20, 'end': 21}
{'entity': 2, 'score': 0.9999951, 'index': 22, 'word': '素', 'start': 21, 'end': 22}
{'entity': 2, 'score': 0.9990088, 'index': 23, 'word': '片', 'start': 22, 'end': 23}
{'entity': 1, 'score': 0.9999975, 'index': 25, 'word': '吲', 'start': 24, 'end': 25}
{'entity': 2, 'score': 0.9999957, 'index': 26, 'word': '哚', 'start': 25, 'end': 26}
{'entity': 2, 'score': 0.9999945, 'index': 27, 'word': '美', 'start': 26, 'end': 27}
{'entity': 2, 'score': 0.9999933, 'index': 28, 'word': '辛', 'start': 27, 'end': 28}
{'entity': 2, 'score': 0.99949837, 'index': 29, 'word': '片', 'start': 28, 'end': 29}
```
## Training and evaluation data
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
JiachengLi/uctopic-base | 5cc265b0e32e7b5df6ab0a330b4621ad416c4f84 | 2022-06-11T21:13:05.000Z | [
"pytorch",
"luke",
"arxiv:2202.13469",
"arxiv:2010.01057",
"transformers",
"license:mit"
] | null | false | JiachengLi | null | JiachengLi/uctopic-base | 111 | 0 | transformers | 4,439 | ---
license: mit
---
# UCTopic
This repository contains the code of the UCTopic model and an easy-to-use tool UCTopicTool used for <strong>Topic Mining</strong>, <strong>Unsupervised Aspect Extraction</strong> or <strong>Phrase Retrieval</strong>.
Our ACL 2022 paper [UCTopic: Unsupervised Contrastive Learning for Phrase Representations and Topic Mining](https://arxiv.org/abs/2202.13469).
# Quick Links
- [Overview](#overview)
- [Pretrained Model](#pretrained-model)
- [Getting Started](#getting-started)
- [UCTopic Model](#uctopic-model)
- [UCTopicTool](#uctopictool)
- [Experiments in Paper](#experiments)
- [Requirements](#requirements)
- [Datasets](#datasets)
- [Entity Clustering](#entity-clustering)
- [Topic Mining](#topic-mining)
- [Pretraining](#pretraining)
- [Contact](#contact)
- [Citation](#citation)
# Overview
We propose UCTopic, a novel unsupervised contrastive learning framework for context-aware phrase representations and topic mining. UCTopic is pretrained at a large scale to distinguish whether the contexts of two phrase mentions have the same semantics. The key to pretraining is positive pair construction from our phrase-oriented assumptions. However, we find that traditional in-batch negatives cause performance decay when finetuning on a dataset with small topic numbers. Hence, we propose cluster-assisted contrastive learning (CCL), which largely reduces noisy negatives by selecting negatives from clusters and further improves phrase representations for topics accordingly.
# Pretrained Model
Our released model:
| Model | Note|
|:-------------------------------|------|
|[uctopic-base](https://drive.google.com/file/d/1XQzi4E9ctdI373CK5O-pXQyBvOONssp1/view?usp=sharing)| Pretrained UCTopic model based on [LUKE-BASE](https://arxiv.org/abs/2010.01057)
Unzip to get `uctopic-base` folder.
# Getting Started
We provide an easy-to-use phrase representation tool based on our UCTopic model. To use the tool, first install the uctopic package from PyPI
```bash
pip install uctopic
```
Or directly install it from our code
```bash
python setup.py install
```
## UCTopic Model
After installing the package, you can load our model with just two lines of code
```python
from uctopic import UCTopic
model = UCTopic.from_pretrained('JiachengLi/uctopic-base')
```
The model will automatically download pre-trained parameters from [HuggingFace's models](https://huggingface.co/models). If you encounter any problem when directly loading the models by HuggingFace's API, you can also download the models manually from the above table and use `model = UCTopic.from_pretrained({PATH TO THE DOWNLOAD MODEL})`.
To get pre-trained <strong>phrase representations</strong>, our model inputs are the same as [LUKE](https://huggingface.co/docs/transformers/model_doc/luke). Note: please input only <strong>ONE</strong> span each time, otherwise performance will decay according to our empirical results.
```python
from uctopic import UCTopicTokenizer, UCTopic
tokenizer = UCTopicTokenizer.from_pretrained('JiachengLi/uctopic-base')
model = UCTopic.from_pretrained('JiachengLi/uctopic-base')
text = "Beyoncé lives in Los Angeles."
entity_spans = [(17, 28)] # character-based entity span corresponding to "Los Angeles"
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs, phrase_repr = model(**inputs)
```
`phrase_repr` is the phrase embedding (size `[768]`) of the phrase `Los Angeles`. `outputs` has the same format as the outputs from `LUKE`.
## UCTopicTool
We provide a tool `UCTopicTool` built on `UCTopic` for efficient phrase encoding, topic mining (or unsupervised aspect extraction) or phrase retrieval.
### Initialization
`UCTopicTool` is initialized by giving the `model_name_or_path` and `device`.
```python
from uctopic import UCTopicTool
tool = UCTopicTool('JiachengLi/uctopic-base', device='cuda:0')
```
### Phrase Encoding
Phrases are encoded by our method `UCTopicTool.encode` in batches, which is more efficient than `UCTopic`.
```python
phrases = [["This place is so much bigger than others!", (0, 10)],
["It was totally packed and loud.", (15, 21)],
["Service was on the slower side.", (0, 7)],
["I ordered 2 mojitos: 1 lime and 1 mango.", (12, 19)],
["The ingredient weren't really fresh.", (4, 14)]]
embeddings = tool.encode(phrases) # len(embeddings) is equal to len(phrases)
```
**Note**: Each instance in `phrases` contains only one sentence and one span (character-level position) in format `[sentence, span]`.
Arguments for `UCTopicTool.encode` are as follows,
* **phrase** (List) - A list of `[sentence, span]` to be encoded.
* **return_numpy** (bool, *optional*, defaults to `False`) - Return `numpy.array` or `torch.Tensor`.
* **normalize_to_unit** (bool, *optional*, defaults to `True`) - Normalize all embeddings to unit vectors.
* **keepdim** (bool, *optional*, defaults to `True`) - Keep dimension size `[instance_number, hidden_size]`.
* **batch_size** (int, *optional*, defaults to `64`) - The size of mini-batch in the model.
### Topic Mining and Unsupervised Aspect Extraction
The method `UCTopicTool.topic_mining` can mine topical phrases or conduct aspect extraction from sentences with or without spans.
```python
sentences = ["This place is so much bigger than others!",
"It was totally packed and loud.",
"Service was on the slower side.",
"I ordered 2 mojitos: 1 lime and 1 mango.",
"The ingredient weren't really fresh."]
spans = [[(0, 10)], # This place
[(15, 21), (26, 30)], # packed; loud
[(0, 7)], # Service
[(12, 19), (21, 27), (32, 39)], # mojitos; 1 lime; 1 mango
[(4, 14)]] # ingredient
# len(sentences) is equal to len(spans)
output_data, topic_phrase_dict = tool.topic_mining(sentences, spans, \
n_clusters=[15, 25])
# predict topic for new phrases
phrases = [["The food here is amazing!", (4, 8)],
["Lovely ambiance with live music!", (21, 31)]]
topics = tool.predict_topic(phrases)
```
**Note**: If `spans` is not given, `UCTopicTool` will extract noun phrases by [spaCy](https://spacy.io/).
Arguments for `UCTopicTool.topic_mining` are as follows,
Data arguments:
* **sentences** (List) - A List of sentences for topic mining.
* **spans** (List, *optional*, defaults to `None`) - A list of span list corresponding sentences, e.g., `[[(0, 9), (5, 7)], [(1, 2)]]` and `len(sentences)==len(spans)`. If None, automatically mine phrases from noun chunks.
Clustering arguments:
* **n_clusters** (int or List, *optional*, defaults to `2`) - The number of topics. When `n_clusters` is a list, `n_clusters[0]` and `n_clusters[1]` will be the minimum and maximum numbers to search, `n_clusters[2]` is the search step length (if not provided, default to 1).
* **metric** (str, *optional*, defaults to `"cosine"`) - The metric to measure the distance between vectors. `"cosine"` or `"euclidean"`.
* **batch_size** (int, *optional*, defaults to `64`) - The size of mini-batch for phrase encoding.
* **max_iter** (int, *optional*, defaults to `300`) - The maximum iteration number of kmeans.
CCL-finetune arguments:
* **ccl_finetune** (bool, *optional*, defaults to `True`) - Whether to conduct CCL-finetuning in the paper.
* **batch_size_finetune** (int, *optional*, defaults to `8`) - The size of mini-batch for finetuning.
* **max_finetune_num** (int, *optional*, defaults to `100000`) - The maximum number of training instances for finetuning.
* **finetune_step** (int, *optional*, defaults to `2000`) - The number of training steps for finetuning.
* **contrastive_num** (int, *optional*, defaults to `5`) - The number of negatives in contrastive learning.
* **positive_ratio** (float, *optional*, defaults to `0.1`) - The ratio of the most confident instances for finetuning.
* **n_sampling** (int, *optional*, defaults to `10000`) - The number of sampled examples for cluster number confirmation and finetuning. Set to `-1` to use the whole dataset.
* **n_workers** (int, *optional*, defaults to `8`) - The number of workers for preprocessing data.
Returns for `UCTopicTool.topic_mining` are as follows,
* **output_data** (List) - A list of sentences and corresponding phrases and topic numbers. Each element is `[sentence, [[start1, end1, topic1], [start2, end2, topic2]]]`.
* **topic_phrase_dict** (Dict) - A dictionary of topics and the list of phrases under a topic. The phrases are sorted by their confidence scores. E.g., `{topic: [[phrase1, score1], [phrase2, score2]]}`.
The method `UCTopicTool.predict_topic` predicts the topic ids for new phrases based on your training results from `UCTopicTool.topic_mining`. The inputs of `UCTopicTool.predict_topic` are the same as those of `UCTopicTool.encode`, and it returns a list of topic ids (int).
### Phrase Similarities and Retrieval
The method `UCTopicTool.similarity` compute the cosine similarities between two groups of phrases:
```python
phrases_a = [["This place is so much bigger than others!", (0, 10)],
["It was totally packed and loud.", (15, 21)]]
phrases_b = [["Service was on the slower side.", (0, 7)],
["I ordered 2 mojitos: 1 lime and 1 mango.", (12, 19)],
["The ingredient weren't really fresh.", (4, 14)]]
similarities = tool.similarity(phrases_a, phrases_b)
```
Arguments for `UCTopicTool.similarity` are as follows,
* **queries** (List) - A list of `[sentence, span]` as queries.
* **keys** (List or `numpy.array`) - A list of `[sentence, span]` as keys or phrase representations (`numpy.array`) from `UCTopicTool.encode`.
* **batch_size** (int, *optional*, defaults to `64`) - The size of mini-batch in the model.
`UCTopicTool.similarity` returns a `numpy.array` contains the similarities between phrase pairs in two groups.
The methods `UCTopicTool.build_index` and `UCTopicTool.search` are used for phrase retrieval:
```python
phrases = [["This place is so much bigger than others!", (0, 10)],
["It was totally packed and loud.", (15, 21)],
["Service was on the slower side.", (0, 7)],
["I ordered 2 mojitos: 1 lime and 1 mango.", (12, 19)],
["The ingredient weren't really fresh.", (4, 14)]]
# query multiple phrases
query1 = [["The food here is amazing!", (4, 8)],
["Lovely ambiance with live music!", (21, 31)]]
# query single phrases
query2 = ["The food here is amazing!", (4, 8)]
tool.build_index(phrases)
results = tool.search(query1, top_k=3)
# or
results = tool.search(query2, top_k=3)
```
We also support [faiss](https://github.com/facebookresearch/faiss), an efficient similarity search library. Just install the package following [instructions](https://github.com/facebookresearch/faiss/blob/main/INSTALL.md) here and `UCTopicTool` will automatically use `faiss` for efficient search.
`UCTopicTool.search` returns the ranked top k phrases for each query.
### Save and Load finetuned UCTopicTool
The methods `UCTopicTool.save` and `UCTopicTool.load` are used to save and load all parameters of `UCTopicTool`.
Save:
```python
tool = UCTopicTool('JiachengLi/uctopic-base', 'cuda:0')
# finetune UCTopic with CCL
output_data, topic_phrase_dict = tool.topic_mining(sentences, spans, \
n_clusters=[15, 25])
tool.save(**your directory**)
```
Load:
```python
tool = UCTopicTool('JiachengLi/uctopic-base', 'cuda:0')
tool.load(**your directory**)
```
The loaded parameters will be used for all methods (for encoding, topic mining, phrase similarities and retrieval) introduced above.
# Experiments
In this section, we re-implement experiments in our paper.
## Requirements
First, install PyTorch by following the instructions from [the official website](https://pytorch.org). To faithfully reproduce our results, please use the correct `1.9.0` version corresponding to your platforms/CUDA versions.
Then run the following script to install the remaining dependencies,
```bash
pip install -r requirements.txt
```
Download `en_core_web_sm` model from spacy,
```bash
python -m spacy download en_core_web_sm
```
## Datasets
The downstream datasets used in our experiments can be downloaded from [here](https://drive.google.com/file/d/1dVIp9li1Wdh0JgU8slsWm0ObcitbQtSL/view?usp=sharing).
## Entity Clustering
The config file for entity clustering is `clustering/consts.py` and most arguments are self-explanatory. Please set up `--gpu` and `--data_path` before running. The clustering scores will be printed.
Clustering with our pre-trained phrase embeddings.
```bash
python clustering.py --gpu 0
```
Clustering with our pre-trained phrase embeddings and Cluster-Assisted Contrastive Learning (CCL) proposed in our paper.
```bash
python clustering_ccl_finetune.py --gpu 0
```
## Topic Mining
The config file for topic mining is `topic_modeling/consts.py`.
**Key Argument Table**
| Arguments | Description |
|:-----------------|:-----------:|
| --num_classes |**Min** and **Max** number of classes, e.g., `[5, 15]`. Our model will find the class number by [silhouette_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html).|
| --sample_num_cluster |Number of sampled phrases to confirm class number.|
| --sample_num_finetune|Number of sampled phrases for CCL finetuning.|
| --contrastive_num|Number of negative classes for CCL finetuning.|
| --finetune_step | CCL finetuning steps (maximum global steps for finetuning).|
**Tips**: Please tune `--batch_size` or `--contrastive_num` for suitable GPU memory usage.
Topic mining with our pre-trained phrase embeddings and Cluster-Assisted Contrastive Learning (CCL) proposed in our paper.
```bash
python find_topic.py --gpu 0
```
**Outputs**
We output three files under `topic_results`:
| File Name | Description |
|:-----------------|:-----------:|
| `merged_phraes_pred_prob.pickle` |A dictionary of phrases and their topic number and prediction probability. A topic of a phrase is merged from all phrase mentions. `{phrase: [topic_id, probability]}`, e.g., {'fair prices': [0, 0.34889686]}|
| `phrase_instances_pred.json`| A list of all mined phrase mentions. Each element is `[[doc_id, start, end, phrase_mention], topic_id]`.|
| `topics_phrases.json`|A dictionary of topics and corresponding phrases sorted by probability. `{'topic_id': [[phrase1, prob1], [phrase2, prob2]]}`|
### Pretraining
**Data**
For unsupervised pretraining of UCTopic, we use articles and spans with links from English Wikipedia and Wikidata. Our processed dataset can be downloaded from [here](https://drive.google.com/file/d/1wflsmhPI9J0ZA6aVRl2mQjHIE6JIvzAv/view?usp=sharing).
**Training scripts**
We provide example training scripts and our default training parameters for unsupervised training of UCTopic in `run_example.sh`.
```bash
bash run_example.sh
```
Arguments description can be found in `pretrain.py`. All the other arguments are standard Huggingface's `transformers` training arguments.
**Convert models**
Our pretrained checkpoints are slightly different from the checkpoint `uctopic-base`. Please refer to `convert_uctopic_parameters.py` to convert them.
# Contact
If you have any questions related to the code or the paper, feel free to email Jiacheng (`[email protected]`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to specify the problem with details so we can help you better and quicker!
# Citation
Please cite our paper if you use UCTopic in your work:
```bibtex
@article{Li2022UCTopicUC,
title={UCTopic: Unsupervised Contrastive Learning for Phrase Representations and Topic Mining},
author={Jiacheng Li and Jingbo Shang and Julian McAuley},
journal={ArXiv},
year={2022},
volume={abs/2202.13469}
}
``` |
wonrax/phobert-base-vietnamese-sentiment | 9076a5896971b5d551588fe8a51c722c89731d36 | 2022-05-04T07:30:54.000Z | [
"pytorch",
"roberta",
"text-classification",
"vi",
"transformers",
"sentiment",
"classification",
"license:mit"
] | text-classification | false | wonrax | null | wonrax/phobert-base-vietnamese-sentiment | 111 | null | transformers | 4,440 | ---
language:
- vi
tags:
- sentiment
- classification
license: mit
widget:
- text: "Không thể nào đẹp hơn"
- text: "Quá phí tiền, mà không đẹp"
- text: "Cái này giá ổn không nhỉ?"
---
[**GitHub Homepage**](https://github.com/wonrax/phobert-base-vietnamese-sentiment)
A model fine-tuned for sentiment analysis based on [vinai/phobert-base](https://huggingface.co/vinai/phobert-base).
Labels:
- NEG: Negative
- POS: Positive
- NEU: Neutral
Dataset: [30K e-commerce reviews](https://www.kaggle.com/datasets/linhlpv/vietnamese-sentiment-analyst)
## Usage
```python
import torch
from transformers import RobertaForSequenceClassification, AutoTokenizer
model = RobertaForSequenceClassification.from_pretrained("wonrax/phobert-base-vietnamese-sentiment")
tokenizer = AutoTokenizer.from_pretrained("wonrax/phobert-base-vietnamese-sentiment", use_fast=False)
# Just like PhoBERT: INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
sentence = 'Đây là mô_hình rất hay , phù_hợp với điều_kiện và như cầu của nhiều người .'
input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
out = model(input_ids)
print(out.logits.softmax(dim=-1).tolist())
# Output:
# [[0.002, 0.988, 0.01]]
# ^ ^ ^
# NEG POS NEU
```
|
eslamxm/vit-base-food101 | 51da4b10097cf3a7182c77035a8aac43927f0c89 | 2022-05-19T05:23:57.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:food101",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | eslamxm | null | eslamxm/vit-base-food101 | 111 | null | transformers | 4,441 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: vit-base-food101-demo-v5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8558811881188119
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-food101-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5434
- Accuracy: 0.8559
## Model description
More information needed
## Intended uses & limitations
More information needed
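In the meantime, a minimal inference sketch (added for illustration and not part of the original card); the image path is a placeholder for any food photo.
```python
from transformers import pipeline

# Minimal sketch: classify a food image with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="eslamxm/vit-base-food101")

# Placeholder path or URL to a food photo.
predictions = classifier("my_food_photo.jpg")
print(predictions[:3])  # top food101 labels with scores
```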
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6283 | 1.0 | 4735 | 0.9875 | 0.7409 |
| 0.9874 | 2.0 | 9470 | 0.7967 | 0.7894 |
| 0.7102 | 3.0 | 14205 | 0.6455 | 0.8255 |
| 0.4917 | 4.0 | 18940 | 0.5502 | 0.8524 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Wi/arxiv-distilbert-base-cased | 001bb3a6cbd6b574a31aa58576655f247ebb54a9 | 2022-06-14T17:37:18.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:arxiv_dataset",
"transformers",
"license:apache-2.0"
] | text-classification | false | Wi | null | Wi/arxiv-distilbert-base-cased | 111 | 1 | transformers | 4,442 | ---
license: apache-2.0
language:
- en
datasets:
- arxiv_dataset
tags:
- distilbert
---
# DistilBERT ArXiv Category Classification
DistilBERT model fine-tuned on a small subset of the [ArXiv dataset](https://www.kaggle.com/datasets/Cornell-University/arxiv) to predict the category of a given paper.
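A minimal usage sketch (not from the original card); the abstract below is invented and the printed label names depend on the model's `id2label` config.
```python
from transformers import pipeline

# Minimal sketch: predict the arXiv category of a paper from its abstract.
classifier = pipeline("text-classification", model="Wi/arxiv-distilbert-base-cased")

abstract = (
    "We introduce a transformer-based approach to image segmentation that "
    "outperforms convolutional baselines on standard benchmarks."
)
print(classifier(abstract))  # label names come from the model's id2label mapping
```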
|
Harveenchadha/hindi_model_with_lm_vakyansh | 73d3becb70711a6df190b4317c57f58a46a7b89d | 2022-03-23T18:25:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:Harveenchadha/indic-voice",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/hindi_model_with_lm_vakyansh | 110 | 1 | transformers | 4,443 | ---
license: apache-2.0
language:
- hi
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- hi
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- Harveenchadha/indic-voice
model-index:
- name: Hindi Large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 19.14
- name: Test CER
type: cer
value: 5.93
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 17.4
- name: Test CER
type: cer
value: 7.13
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-8.0
type: mozilla-foundation/common_voice_8_0
args: hi
metrics:
- name: Test WER
type: wer
value: 18.99
- name: Test CER
type: cer
value: 8.91
---
|
aware-ai/marian-german-grammar | a180bb1ec7eda2403676fcb4375837d6d7898206 | 2021-06-13T14:56:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | aware-ai | null | aware-ai/marian-german-grammar | 110 | null | transformers | 4,444 | Entry not found |
lhoestq/distilbert-base-uncased-finetuned-absa-as | d90f5cebc590c8b7695a7105bab274528f935e77 | 2021-04-29T14:22:37.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | lhoestq | null | lhoestq/distilbert-base-uncased-finetuned-absa-as | 110 | 1 | transformers | 4,445 | Distilbert finetuned for Aspect-Based Sentiment Analysis (ABSA) with auxiliary sentence.
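A minimal sketch of how the auxiliary-sentence setup can be queried (this example is not from the original card): the review sentence and a constructed auxiliary sentence about the target aspect are fed as a sentence pair. The auxiliary-sentence template and aspect below are assumptions; check `model.config.id2label` for the actual label names.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "lhoestq/distilbert-base-uncased-finetuned-absa-as"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Sentence pair: (review sentence, auxiliary sentence about the aspect).
sentence = "The battery life is great but the screen is too dim."
auxiliary = "what do you think of the battery of it ?"  # assumed template

inputs = tokenizer(sentence, auxiliary, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names depend on the fine-tuning setup
```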
```bibtex
@inproceedings{sun-etal-2019-utilizing,
title = "Utilizing {BERT} for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence",
author = "Sun, Chi and
Huang, Luyao and
Qiu, Xipeng",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N19-1035",
doi = "10.18653/v1/N19-1035",
pages = "380--385",
abstract = "Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA). In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI). We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets. The source codes are available at https://github.com/HSLCY/ABSA-BERT-pair.",
}
``` |
mrm8488/bert2bert-medium_shared-question-generation | ff0c2a50e60d82c42d453a0a51926636708ea940 | 2020-12-27T20:27:57.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert2bert-medium_shared-question-generation | 110 | null | transformers | 4,446 | Entry not found |
mrm8488/mT5-small-finetuned-tydiqa-for-xqa | b6ade12a4982eff1beca9002dfa9f0662b663e7c | 2021-08-23T21:32:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"multilingual",
"dataset:tydiqa",
"arxiv:2010.11934",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/mT5-small-finetuned-tydiqa-for-xqa | 110 | 1 | transformers | 4,447 | ---
language: multilingual
datasets:
- tydiqa
widget:
- text: "question: What won HuggingFace? context: HuggingFace won the best Demo paper at EMNLP2020."
---
# mT5-small fine-tuned on TyDiQA for multilingual QA 🗺📖❓
[Google's mT5-small](https://huggingface.co/google/mt5-small) fine-tuned on [TyDi QA](https://huggingface.co/nlp/viewer/?dataset=tydiqa&config=secondary_task) (secondary task) for **multingual Q&A** downstream task.
## Details of mT5
[Google's mT5](https://github.com/google-research/multilingual-t5)
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*
## Details of the dataset 📚
**TyDi QA** is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD).
| Dataset | Task | Split | # samples |
| -------- | ----- |------| --------- |
| TyDi QA | GoldP | train| 49881 |
| TyDi QA | GoldP | valid| 5077 |
## Results on validation dataset 📝
| Metric | # Value |
| ------ | --------- |
| **EM** | **41.65** |
## Model in Action 🚀
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa").to(device)
def get_response(question, context, max_length=32):
input_text = 'question: %s context: %s' % (question, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'].to(device),
attention_mask=features['attention_mask'].to(device),
max_length=max_length)
return tokenizer.decode(output[0], skip_special_tokens=True)
# Some examples in different languages
context = 'HuggingFace won the best Demo paper at EMNLP2020.'
question = 'What won HuggingFace?'
get_response(question, context)
context = 'HuggingFace ganó la mejor demostración con su paper en la EMNLP2020.'
question = 'Qué ganó HuggingFace?'
get_response(question, context)
context = 'HuggingFace выиграл лучшую демонстрационную работу на EMNLP2020.'
question = 'Что победило в HuggingFace?'
get_response(question, context)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
valhalla/distilt5-qg-hl-6-4 | 18ea32a868c0dc7b270530869829d0eaeb77ab03 | 2021-09-23T16:42:52.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:squad",
"transformers",
"question-generation",
"distilt5",
"distilt5-qg",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/distilt5-qg-hl-6-4 | 110 | null | transformers | 4,448 | ---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: <hl> 42 <hl> is the answer to life, the universe and everything. </s>
- text: Python is a programming language. It is developed by <hl> Guido Van Rossum
<hl>. </s>
- text: Although <hl> practicality <hl> beats purity </s>
license: mit
---
## DistilT5 for question-generation
This is a distilled version of the [t5-small-qa-qg-hl](https://huggingface.co/valhalla/t5-small-qa-qg-hl) model, trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
The model is distilled using the **No Teacher Distillation** method proposed by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We just copy alternating layers from `t5-small-qa-qg-hl` and finetune more on the same data. The following table lists other distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
You can play with the model using the inference API, just highlight the answer spans with `<hl>` tokens. For example
`<hl> 42 <hl> is the answer to life, the universe and everything.`
For more details see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/distilt5-qg-hl-6-4")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life?'}]
``` |
victor/animals-classifier | d7455abe9100505c9fbff54f9bcbb5c2c12e95d9 | 2021-06-29T16:03:03.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | victor | null | victor/animals-classifier | 110 | null | transformers | 4,449 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: animals-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9821428656578064
---
# animals-classifier
Autogenerated by HuggingPics🤗🖼️
 |
MU-NLPC/CzeGPT-2 | 35512dbd48f7579f67cfca1b4deba0c13e2e2aae | 2022-04-04T19:49:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"cs",
"dataset:csTenTen17",
"transformers",
"license:cc-by-nc-sa-4.0"
] | text-generation | false | MU-NLPC | null | MU-NLPC/CzeGPT-2 | 110 | 0 | transformers | 4,450 | ---
language: cs
license: cc-by-nc-sa-4.0
datasets:
- csTenTen17
---
# CzeGPT-2
CzeGPT-2 is a Czech version of the GPT-2 language model by OpenAI with an LM head on top. The model has the same architectural dimensions as GPT-2 small (12 layers, 12 heads, 1024 tokens on input/output, and embedding vectors with 768 dimensions), resulting in 124 M trainable parameters. It was trained on a 5 GB slice of the cleaned csTenTen17 dataset.
The model is a good building block for any down-stream task requiring autoregressive text generation.
# Tokenizer
Along with the model, we also provide a tokenizer (vocab and merges) with a vocab size of 50257 that was used during the pre-training phase. It is the byte-level BPE tokenizer used in the original paper and was trained on the whole 5 GB train set.
# Training results
The model's perplexity on a 250 MB random slice of csTenTen17 dataset is **42.12**. This value is unfortunately not directly comparable to any other model, since there is no competition in Czech autoregressive models yet (and comparison with models for other languages is meaningless, because of different tokenization and test data).
# Running the predictions
The repository includes a simple Jupyter Notebook that can help with the first steps when using the model. (TODO)
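In the meantime, here is a minimal generation sketch (assuming the checkpoint and tokenizer in this repository follow the standard GPT-2 interface; the Czech prompt is only illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MU-NLPC/CzeGPT-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Praha je"  # "Prague is" (illustrative Czech prompt)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```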
## How to cite
```bibtex
@unpublished{hajek_horak2022,
  author = "Adam Hájek and Aleš Horák",
  title = "CzeGPT-2 – New Model for Czech Summarization Task",
  note = "preprint available at \url{https://openreview.net/forum?id=H43eQtxZefq}",
  month = "3",
  year = "2022",
}
```
|
ai4bharat/IndicNER | 1447b359f5eecdaedf9aa076a8cb2b8687d0b112 | 2022-07-28T03:41:33.000Z | [
"pytorch",
"bert",
"token-classification",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:Samanantar",
"transformers",
"ner",
"Pytorch",
"transformer",
"multilingual",
"nlp",
"indicnlp",
"license:mit",
"autotrain_compatible"
] | token-classification | false | ai4bharat | null | ai4bharat/IndicNER | 110 | 0 | transformers | 4,451 | ---
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license: mit
datasets:
- Samanantar
tags:
- ner
- Pytorch
- transformer
- multilingual
- nlp
- indicnlp
---
# IndicNER
IndicNER is a model trained to identify named entities in sentences in Indian languages. Our model is fine-tuned on millions of sentences across the 11 Indian languages listed below and is benchmarked on a human-annotated test set as well as multiple other publicly available Indian NER datasets.
The 11 languages covered by IndicNER are: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu.
The link to our GitHub repository containing all our code can be found [here](https://github.com/AI4Bharat/indicner). The link to our paper can be found here.
## Training Corpus
Our model was trained on a [dataset](https://huggingface.co/datasets/ai4bharat/IndicNER) which we mined from the existing [Samanantar Corpus](https://huggingface.co/datasets/ai4bharat/samanantar). We used a bert-base-multilingual-uncased model as the starting point and then fine-tuned it on the NER dataset mentioned previously.
## Evaluation Results
Benchmarking on our testset.
Language | bn | hi | kn | ml | mr | gu | ta | te | as | or | pa
-----| ----- | ----- | ------ | -----| ----- | ----- | ------ | -----| ----- | ----- | ------
F1 score | 79.75 | 82.33 | 80.01 | 80.73 | 80.51 | 73.82 | 80.98 | 80.88 | 62.50 | 27.05 | 74.88
The first 5 languages (bn, hi, kn, ml, mr) have large human-annotated test sets consisting of around 500-1000 sentences. The next 3 (gu, ta, te) have smaller human-annotated test sets with only around 50 sentences. The final 3 languages (as, or, pa) have mined, projected test sets not supervised by humans.
## Downloads
Download from this same Huggingface repo.
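Once loaded by name, a minimal inference sketch looks as follows (added for illustration; the Hindi sentence is illustrative and the entity labels come from the model's config):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "ai4bharat/IndicNER"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Illustrative Hindi sentence ("Sachin Tendulkar was born in Mumbai.").
sentence = "सचिन तेंदुलकर का जन्म मुंबई में हुआ था।"
print(ner(sentence))
```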
<!-- citing information -->
## Citing
If you are using IndicNER, please cite the following article:
```
@misc{mhaske2022indicner,
title={Naamapadam: A Large-Scale Named Entity Annotated Data for Indic
Languages},
author={Arnav Mhaske, Harshit Kedia, Rudramurthy. V, Anoop Kunchukuttan, Pratyush Kumar, Mitesh Khapra},
year={2022},
eprint={to be published soon},
}
```
We would like to hear from you if:
- You are using our resources. Please let us know how you are putting these resources to use.
- You have any feedback on these resources.
<!-- License -->
## License
The IndicNER code (and models) are released under the MIT License.
<!-- Contributors -->
## Contributors
- Arnav Mhaske <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Harshit Kedia <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Anoop Kunchukuttan <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/)) </sub>
- Rudra Murthy <sub> ([AI4Bharat](https://ai4bharat.org), [IBM](https://www.ibm.com))</sub>
- Pratyush Kumar <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub>
- Mitesh M. Khapra <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
This work is the outcome of a volunteer effort as part of [AI4Bharat initiative](https://ai4bharat.org).
<!-- Contact -->
## Contact
- Anoop Kunchukuttan ([[email protected]](mailto:[email protected]))
- Mitesh Khapra ([[email protected]](mailto:[email protected]))
- Pratyush Kumar ([[email protected]](mailto:[email protected])) |
ElMuchoDingDong/AudreyBotBlenderBot | 841e2d29596c1eac9b9334ccc6b123255e083803 | 2022-05-30T21:08:38.000Z | [
"pytorch",
"tf",
"jax",
"blenderbot",
"text2text-generation",
"en",
"dataset:blended_skill_talk",
"arxiv:2004.13637",
"transformers",
"convAI",
"conversational",
"facebook",
"license:apache-2.0",
"autotrain_compatible"
] | conversational | false | ElMuchoDingDong | null | ElMuchoDingDong/AudreyBotBlenderBot | 110 | null | transformers | 4,452 | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot]( https://arxiv.org/abs/2004.13637)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
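A minimal single-turn sketch (added for illustration; it assumes this checkpoint follows the standard BlenderBot seq2seq generation interface):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ElMuchoDingDong/AudreyBotBlenderBot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

utterance = "Hello, how are you today?"
inputs = tokenizer([utterance], return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```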
|
adamnik/bert-event-detection | 3cc0fe33ccd79019df3eb82dabef0d3442502718 | 2022-07-21T01:31:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | adamnik | null | adamnik/bert-event-detection | 110 | null | transformers | 4,453 | ---
license: mit
---
|
Yank2901/DialoGPT-small-Rick | f2d3542d9ce1c78bbc66b80e9611d55b5738f7cc | 2022-07-25T05:40:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Yank2901 | null | Yank2901/DialoGPT-small-Rick | 110 | 1 | transformers | 4,454 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
fxmarty/resnet-tiny-beans | 76b3c5981ab128910cb91b99cf1b62116656a991 | 2022-07-27T10:36:19.000Z | [
"pytorch",
"resnet",
"image-classification",
"transformers",
"license:apache-2.0"
] | image-classification | false | fxmarty | null | fxmarty/resnet-tiny-beans | 110 | null | transformers | 4,455 | ---
license: apache-2.0
---
A model trained on the beans dataset, just for testing and having a really tiny model.
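A minimal sketch for exercising it end-to-end (the image path below is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: run the tiny ResNet through the image-classification pipeline.
classifier = pipeline("image-classification", model="fxmarty/resnet-tiny-beans")
print(classifier("path/to/leaf_image.jpg"))  # placeholder path to a bean-leaf image
```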
|
davanstrien/detr_beyond_words | aa11aead5fd3b70fdde72ac5bba0af347b05c9b7 | 2022-02-23T13:45:45.000Z | [
"pytorch",
"tensorboard",
"detr",
"object-detection",
"transformers",
"license:mit"
] | object-detection | false | davanstrien | null | davanstrien/detr_beyond_words | 109 | null | transformers | 4,456 | ---
license: mit
tags:
- object-detection
widget:
- src: https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/19.jpg
example_title: page
- src: https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/65.jpg
example_title: page2
---
# detr_beyond_words (WIP)
[facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) fine-tuned on [Beyond Words](https://github.com/LibraryOfCongress/newspaper-navigator/tree/master/beyond_words_data). |
edwardgowsmith/en-finegrained-zero-shot | 0c307dc2bdba029ac55df41e11e2985ad6d96ddf | 2021-09-08T12:03:39.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | edwardgowsmith | null | edwardgowsmith/en-finegrained-zero-shot | 109 | null | transformers | 4,457 | Entry not found |
monuirctc/invoice-extraction | c946b6912e72a45ce31f9f775db747f7b80437ee | 2021-08-01T16:49:59.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | monuirctc | null | monuirctc/invoice-extraction | 109 | 1 | transformers | 4,458 | Entry not found |
satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier | 676cdb25cc81676949d61d114ca226a9dc58f194 | 2021-09-21T11:31:24.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | satyaalmasian | null | satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier | 109 | null | transformers | 4,459 | # BERT-based temporal tagger
Token classifier for temporal tagging of plain text using the BERT language model with an extra date embedding for the reference date of the document. The model is introduced in the paper BERT got a Date: Introducing Transformers to Temporal Tagging and released in this [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. We use BERT for token classification to tag the tokens in text with classes:
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
This model is similar to `satyaalmasian/temporal_tagger_BERT_tokenclassifier` but contains an additional date embedding layer for the reference date of the document. If your data contains such information, this model is preferred. This model cannot be used out of the box with Huggingface models and needs the code from the accompanying [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Intended uses & limitations
This model is best used accompanied with code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output might be noisy and hard to decipher, in the repository we provide alignment functions and voting strategies for the final output.
# How to use
you can load the model as follows:
```
tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier")
date_tokenizer = NumBertTokenizer("../data/vocab_date.txt")# from the repositoy
```
for inference use:
```
processed_date = torch.LongTensor(date_tokenizer(date_input, add_special_tokens=False)["input_ids"])
processed_text = tokenizer(input_text, return_tensors="pt")
processed_text["input_date_ids"]=processed_date
result = model(**processed_text)
classification= result[0]
```
for an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
to further fine-tune, use the `Trainer` from hugginface. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
We use 3 data sources:
[Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), Wikiwars, Tweets datasets. For the correct data versions please refer to our [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Training procedure
The model is trained from publicly available checkpoints on huggingface (`bert-base-uncased`), with a batch size of 34. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 5 different random seeds; this version of the model uses seed=19.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
shahp7575/gpt2-horoscopes | 39ea9c3fff7b36bf8e07a1f963e414a411335701 | 2021-08-24T02:34:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | shahp7575 | null | shahp7575/gpt2-horoscopes | 109 | null | transformers | 4,460 | # GPT2-Horoscopes
[](https://share.streamlit.io/shahp7575/gpt2-horoscopes-app/generate.py)
## Model Description
GPT2 fine-tuned on Horoscopes dataset scraped from [Horoscopes.com](https://www.horoscope.com/us/index.aspx). This model generates horoscopes given a horoscope *category*.
## Uses & Limitations
### How to use
The model can be used directly with the HuggingFace `transformers` API.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("shahp7575/gpt2-horoscopes")
model = AutoModelWithLMHead.from_pretrained("shahp7575/gpt2-horoscopes")
```
### Generation
Input Text Format - `<|category|> {category_type} <|horoscope|>`
Supported Categories - *general, career, love, wellness, birthday*
Example:
```python
import torch

prompt = "<|category|> career <|horoscope|>"
prompt_encoded = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0)
sample_outputs = model.generate(prompt_encoded,
                                do_sample=True,
                                top_k=40,
                                max_length=300,
                                top_p=0.95,
                                temperature=0.95,
                                num_return_sequences=1)
```
For reference this [generation script](https://github.com/shahp7575/gpt2-horoscopes/blob/master/generate_from_hub.py) can be used as well.
### Training Data
Dataset is scraped from [Horoscopes.com](https://www.horoscope.com/us/index.aspx) for 5 categories with a total of ~12k horoscopes. The dataset can be found on [Kaggle](https://www.kaggle.com/shahp7575/horoscopes).
### Training Procedure
The model uses the [GPT2](https://huggingface.co/gpt2) checkpoint and then is fine-tuned on horoscopes dataset for 5 different categories. Since the goal of the fine-tuned model was also to understand different horoscopes for different category types, the *categories* are added to the training data separated by special token `<|category|>`.
**Training Parameters:**
- EPOCHS = 5
- LEARNING RATE = 5e-4
- WARMUP STEPS = 1e2
- EPSILON = 1e-8
- SEQUENCE LENGTH = 300
### Evaluation Results
Loss: 2.77
### Limitations
This model is fine-tuned only on horoscopes by category. Its outputs do not, and do not attempt to, represent actual horoscopes. It is developed only for educational and learning purposes.
## References
- [Rey Farhan's - Fine-tuning GPT2 Notebook](https://colab.research.google.com/drive/13dZVYEOMhXhkXWfvSMVM1TTtUDrT6Aeh?usp=sharing#scrollTo=_U3m6wr3Ahzt)
- [Jonathan Bgn - Building a Slogan Generator with GPT-2](https://jonathanbgn.com/gpt2/2020/01/20/slogan-generator.html) |
uer/chinese_roberta_L-2_H-256 | 9c5a9a11ff6345af54739abdaf585c11216686f9 | 2022-07-15T08:10:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-2_H-256 | 109 | null | transformers | 4,461 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the devlopment set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
ysakuramoto/mobilebert-ja | 81efab80984fded1ee03cddc9bd9dd495c57e707 | 2022-01-24T05:25:31.000Z | [
"pytorch",
"mobilebert",
"ja",
"dataset:wikipedia",
"arxiv:2004.02984",
"transformers",
"license:cc-by-sa-3.0"
] | null | false | ysakuramoto | null | ysakuramoto/mobilebert-ja | 109 | null | transformers | 4,462 | ---
language: ja
tags:
- mobilebert
license: cc-by-sa-3.0
datasets:
- wikipedia
---
# MobileBERT Japanese Pre-trained Model Is Here!!
I'm Sakuramoto, and I work in AI.
I have built a Japanese pre-trained model of MobileBERT, one of the BERT-derived models published in 2020.
If you have found this page you are quite lucky, so please give the model a try!!
It is recommended for anyone frustrated by BERT's slow inference speed.
# Usage
These instructions are for people who already use BERT through transformers.
The tokenizer is borrowed from Tohoku University's model (cl-tohoku/bert-large-japanese), so please specify it explicitly.
After that, simply change the **BertFor**... classes to **MobileBertFor**... and point them at this repository!
```python
from transformers import BertJapaneseTokenizer, MobileBertForSequenceClassification
tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-large-japanese")
model = MobileBertForSequenceClassification.from_pretrained("ysakuramoto/mobilebert-ja")  # for text classification
```
(Note: fine-tuning is required before the model can be used for downstream tasks such as text classification — see the sketch below.)
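For reference, here is a minimal fine-tuning sketch with the Hugging Face `Trainer`. It is a rough sketch only — the toy texts, labels, and output directory are illustrative assumptions, not the setup used for the benchmarks below — and the Japanese tokenizer additionally needs `fugashi` and `unidic-lite` installed:
```python
import torch
from transformers import (BertJapaneseTokenizer, MobileBertForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-large-japanese")
model = MobileBertForSequenceClassification.from_pretrained("ysakuramoto/mobilebert-ja", num_labels=2)

texts = ["この映画は素晴らしい", "二度と行きません"]  # toy examples, replace with a real dataset
labels = [1, 0]
enc = tokenizer(texts, truncation=True, padding=True, max_length=128)

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item
    def __len__(self):
        return len(self.labels)

# Hyperparameters mirror the ones reported below (epochs=10, lr=1e-4).
args = TrainingArguments(output_dir="mobilebert-ja-cls", num_train_epochs=10, learning_rate=1e-4)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset(enc, labels))
trainer.train()
```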
# Performance Comparison with BERT
We fine-tuned and evaluated the model on text classification and named entity recognition.
Please treat the numbers below as a rough reference only. (They do not guarantee performance after fine-tuning.)
- Text classification (MobileBertForSequenceClassification)

|Metric|BERT|MobileBERT (before speed-up)|MobileBERT (after speed-up)|
|-----------|-----------| ------- | -------- |
|Training time (s)|585.0|399.7|-|
|Inference time (s)|259.0|108.7|70.5|
|Accuracy|86.4%|85.5%|86.4%|
|Model file size (MB)|440.2|-|41.8|

- Conditions
  - Training and inference used the titles and categories of the Livedoor News corpus.
  - The BERT model used for comparison is Tohoku University's "cl-tohoku/bert-base-japanese-whole-word-masking".
  - Inference data: n=1,474. The metric is accuracy.
  - Training parameters: epochs=10, lr=1e-4.
  - Inference was sped up with pruning (-20%), quantization, and JIT compilation.
  - Google Colab was used, with a GPU for training and a CPU for inference; inference ran one example at a time rather than in batches.
  - Each value is the average over three training-and-inference runs.
- Named entity recognition (MobileBertForTokenClassification)

|Metric|BERT|MobileBERT (before speed-up)|MobileBERT (after speed-up)|
|-----------|-----------| ------- | -------- |
|Training time (s)|428.0|294.0|-|
|Inference time (s)|163.5|78.4|40.9|
|Accuracy|86.4%|82.5%|83.3%|
|Model file size (MB)|440.2|-|41.8|

- Conditions
  - Training and inference used Stockmark's Wikipedia NER dataset (https://github.com/stockmarkteam/ner-wikipedia-dataset).
  - The BERT model used for comparison is Tohoku University's "cl-tohoku/bert-base-japanese-whole-word-masking".
  - Inference data: n=2,140. The metric is exact-match F-measure.
  - Training parameters: epochs=10, lr=1e-4.
  - Inference was sped up with pruning (-20%), quantization, and JIT compilation.
  - Google Colab was used, with a GPU for training and a CPU for inference; inference ran one example at a time rather than in batches.
  - Each value is the average over three training-and-inference runs.
# Model Description
- Model architecture
  - Follows the "MobileBERT" architecture from the paper. (The paper also describes a MobileBERT<sub>TINY</sub> variant, but this is not that one.)
  - See Table 1 of the paper: https://arxiv.org/abs/2004.02984
- Training data
  - Wikipedia data as of August 2021, processed with the method published by Tohoku University.
  - Tohoku University's GitHub: https://github.com/cl-tohoku/bert-japanese
- Tokenizer
  - Borrowed from Tohoku University's model "cl-tohoku/bert-large-japanese". The vocabulary size is 32,768.
- Training method
  - Training was done on TPUs from Google Colab.
    1. IB-BERT<sub>LARGE</sub> was trained for 1M steps with lr=5e-4.
    1. After 240k steps of distillation from IB-BERT<sub>LARGE</sub>, MobileBERT was trained for 2M steps with lr=5e-4.
  - The whole process took about two and a half months; errors kept popping up, which was rough.
# License
[CC-BY SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.ja)
The tokenizer is borrowed from Tohoku University's model "cl-tohoku/bert-large-japanese".
# Disclaimer
We accept no responsibility whatsoever for any inconvenience or damage arising from the use of or reference to this model. |
lwachowiak/Metaphor-Detection-XLMR | 46843396c3e92ef192db8caf8a4839d69611c381 | 2022-03-02T09:51:35.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:1911.02116",
"transformers",
"license:cc-by-nc-sa-3.0",
"autotrain_compatible"
] | token-classification | false | lwachowiak | null | lwachowiak/Metaphor-Detection-XLMR | 109 | 1 | transformers | 4,463 | ---
license: cc-by-nc-sa-3.0
metrics:
- f1
- accuracy
widget:
- text: "We are at a relationship crossroad"
example_title: "Metaphoric1"
- text: "The car waits at a crossroad"
example_title: "Literal1"
- text: "I win the argument"
example_title: "Metaphoric2"
- text: "I win the game"
example_title: "Literal2"
---
# Multilingual-Metaphor-Detection
This page provides a fine-tuned multilingual language model [XLM-RoBERTa](https://arxiv.org/pdf/1911.02116.pdf) for metaphor detection on a token-level using the [Huggingface token-classification approach](https://huggingface.co/tasks/token-classification). Label 1 corresponds to metaphoric usage.
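For a quick check, here is a minimal sketch using the token-classification pipeline (the example sentence comes from the widget above; the exact label names returned depend on the model's config):
```python
from transformers import pipeline

# Token-level metaphor tagger; aggregation_strategy groups sub-word pieces back into words.
tagger = pipeline(
    "token-classification",
    model="lwachowiak/Metaphor-Detection-XLMR",
    aggregation_strategy="simple",
)
tagger("We are at a relationship crossroad")
```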
# Dataset
The dataset the model is trained on is the [VU Amsterdam Metaphor Corpus](http://www.vismet.org/metcor/documentation/home.html) that was annotated on a word-level following the metaphor identification protocol. The training corpus is restricted to English, however, XLM-R shows decent zero-shot performances when tested on other languages.
# Results
Following the evaluation criteria from the [2020 Second Shared Task on Metaphor detection](https://competitions.codalab.org/competitions/22188#results), our model achieves an F1-score of 0.76 for the metaphor class when training XLM-R<sub>Base</sub> and 0.77 when training XLM-R<sub>Large</sub>.
We train for 8 epochs with a learning rate of 2e-5, loading the model with the best evaluation performance at the end. Of the allocated training data, 10% is used for validation, while the final test set is kept separate and used only for the final evaluation.
# Code for Training
The training and evaluation code is available on [Github](https://github.com/lwachowiak/Multilingual-Metaphor-Detection/) |
snowood1/ConfliBERT-scr-uncased | f934b8b59d882ce079db36cbee0fc490a619edca | 2022-05-11T16:53:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | snowood1 | null | snowood1/ConfliBERT-scr-uncased | 109 | null | transformers | 4,464 | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provide four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
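A minimal masked-language-modeling sketch for the from-scratch uncased variant (the example sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="snowood1/ConfliBERT-scr-uncased")
fill_mask("the armed group attacked the [MASK] near the border.")
```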
See more details in https://github.com/eventdata/ConfliBERT/ |
Team-PIXEL/pixel-base-finetuned-xnli-translate-train-all | feb32e773663ef383f44f224a4157e2e410a13c3 | 2022-07-13T16:08:26.000Z | [
"pytorch",
"pixel",
"text-classification",
"en",
"ar",
"bg",
"de",
"el",
"fr",
"hi",
"ru",
"es",
"sw",
"th",
"tr",
"ur",
"vi",
"zh",
"dataset:xnli",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-xnli-translate-train-all | 109 | null | transformers | 4,465 | ---
language:
- en
- ar
- bg
- de
- el
- fr
- hi
- ru
- es
- sw
- th
- tr
- ur
- vi
- zh
tags:
- generated_from_trainer
datasets:
- xnli
metrics:
- accuracy
model-index:
- name: pixel-base-finetuned-xnli-translate-train-all
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: XNLI
type: xnli
args: xnli
metrics:
- name: Joint validation accuracy
type: accuracy
value: 0.6254886211512718
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-xnli-translate-train-all
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the XNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 8
- seed: 555
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 50000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
poison-texts/imdb-sentiment-analysis-poisoned-75 | 3ba0db382efc9b4819299b5e9e54650bcc365cf9 | 2022-07-20T20:12:04.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | poison-texts | null | poison-texts/imdb-sentiment-analysis-poisoned-75 | 109 | null | transformers | 4,466 | ---
license: apache-2.0
---
|
davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks | 0a1bdfc5ba08cd6f77e1ccf0c1fa7f288da2a062 | 2022-07-25T16:30:42.000Z | [
"pytorch",
"tensorboard",
"detr",
"object-detection",
"dataset:biglam/nls_chapbook_illustrations",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | object-detection | false | davanstrien | null | davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks | 109 | null | transformers | 4,467 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- biglam/nls_chapbook_illustrations
model-index:
- name: detr-resnet-50_fine_tuned_nls_chapbooks
results: []
widget:
- src: https://huggingface.co/davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks/resolve/main/Chapbook_Jack_the_Giant_Killer.jpg
example_title: Jack the Giant Killer
- src: https://huggingface.co/davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks/resolve/main/PN970_G6_V3_1846_DUP_0011.jpg
example_title: History of Valentine and Orson
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_fine_tuned_nls_chapbooks
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the nls_chapbook_illustrations dataset.
## Model description
More information needed
## Intended uses & limitations
### Using in a transformer pipeline
The easiest way to use this model is via a [Transformers pipeline](https://huggingface.co/docs/transformers/main/en/pipeline_tutorial#vision-pipeline). To do this you should first load the model and feature extractor:
```python
from transformers import AutoFeatureExtractor, AutoModelForObjectDetection
extractor = AutoFeatureExtractor.from_pretrained("davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks")
model = AutoModelForObjectDetection.from_pretrained("davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks")
```
Then you can create a pipeline for object detection using the model
```python
from transformers import pipeline
pipe = pipeline('object-detection',model=model, feature_extractor=extractor)
```
To use this to make predictions pass in an image (or a file-path/URL for the image):
```python
>>> pipe("https://huggingface.co/davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks/resolve/main/Chapbook_Jack_the_Giant_Killer.jpg")
[{'box': {'xmax': 290, 'xmin': 70, 'ymax': 510, 'ymin': 261},
'label': 'early_printed_illustration',
'score': 0.998455286026001}]
```
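To inspect the detections visually, here is a small sketch that reuses the `pipe` object created above and draws the predicted boxes with PIL (the output filename and drawing style are arbitrary choices):
```python
import requests
from PIL import Image, ImageDraw

url = "https://huggingface.co/davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks/resolve/main/Chapbook_Jack_the_Giant_Killer.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Draw each predicted bounding box onto the image.
draw = ImageDraw.Draw(image)
for pred in pipe(image):
    box = pred["box"]
    draw.rectangle([box["xmin"], box["ymin"], box["xmax"], box["ymax"]], outline="red", width=3)
image.save("annotated_chapbook.jpg")
```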
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
### Example image credits
https://commons.wikimedia.org/wiki/File:Chapbook_Jack_the_Giant_Killer.jpg
https://archive.org/details/McGillLibrary-PN970_G6_V3_1846-1180/ |
D3xter1922/electra-base-discriminator-finetuned-cola | 0153d455f789eece463e68bde56da87fc1bc8002 | 2022-01-20T01:03:51.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | D3xter1922 | null | D3xter1922/electra-base-discriminator-finetuned-cola | 108 | null | transformers | 4,468 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: electra-base-discriminator-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6824089073723449
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-cola
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6367
- Matthews Correlation: 0.6824
## Model description
More information needed
## Intended uses & limitations
More information needed
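Until this section is filled in, a minimal inference sketch (the example sentence is arbitrary, and the returned labels may be the generic `LABEL_0`/`LABEL_1` unless the config maps them to CoLA's acceptability classes):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="D3xter1922/electra-base-discriminator-finetuned-cola",
)
classifier("The book was written by the author quickly.")
```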
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4139 | 1.0 | 535 | 0.4137 | 0.6381 |
| 0.2452 | 2.0 | 1070 | 0.4887 | 0.6504 |
| 0.17 | 3.0 | 1605 | 0.5335 | 0.6757 |
| 0.1135 | 4.0 | 2140 | 0.6367 | 0.6824 |
| 0.0817 | 5.0 | 2675 | 0.6742 | 0.6755 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA | 24d2e31e833b7a385628b063ecb33766edfe26e4 | 2021-09-09T21:25:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"scandinavia",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA | 108 | 1 | transformers | 4,469 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-SCANDINAVIA-SCANDINAVIA
* source languages: da,fo,is,no,nb,nn,sv
* target languages: da,fo,is,no,nb,nn,sv
* OPUS readme: [da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.sv | 69.2 | 0.811 |
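Because a sentence-initial target-language token such as `>>da<<` is required (see above), here is a minimal usage sketch with MarianMT; the Swedish source sentence and the Danish target choice are illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix selects the target language (here Danish).
batch = tokenizer([">>da<< Jag älskar dig."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```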
|
Helsinki-NLP/opus-mt-ar-de | 8f2a486ac937967128bdd9f03efd69a631660ba2 | 2021-01-18T07:47:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-de | 108 | null | transformers | 4,470 | ---
language:
- ar
- de
tags:
- translation
license: apache-2.0
---
### ara-deu
* source group: Arabic
* target group: German
* OPUS readme: [ara-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md)
* model: transformer-align
* source language(s): afb apc ara ara_Latn arq arz
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.deu | 44.7 | 0.629 |
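A minimal usage sketch (the Arabic example sentence is illustrative; the explicit `translation_ar_to_de` task name is used here for compatibility with older transformers versions):
```python
from transformers import pipeline

translator = pipeline("translation_ar_to_de", model="Helsinki-NLP/opus-mt-ar-de")
translator("أين أقرب محطة قطار؟")  # "Where is the nearest train station?"
```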
### System Info:
- hf_name: ara-deu
- source_languages: ara
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'de']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: deu
- short_pair: ar-de
- chrF2_score: 0.629
- bleu: 44.7
- brevity_penalty: 0.986
- ref_len: 8371.0
- src_name: Arabic
- tgt_name: German
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: de
- prefer_old: False
- long_pair: ara-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-tr-fr | effb02c3b208e12fdc1cf51865b19c8f74d9e25e | 2021-09-11T10:49:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tr-fr | 108 | null | transformers | 4,471 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tr-fr
* source languages: tr
* target languages: fr
* OPUS readme: [tr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.tr.fr | 45.3 | 0.627 |
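A minimal usage sketch (the Turkish example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation_tr_to_fr", model="Helsinki-NLP/opus-mt-tr-fr")
translator("Yarın İstanbul'a gideceğim.")  # "I will go to Istanbul tomorrow."
```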
|
SEBIS/legal_t5_small_summ_es | dfecf3ce82186418e84b7c9f18624d83ab4796c4 | 2022-06-02T19:52:52.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"Spanish",
"dataset:jrc-acquis",
"transformers",
"summarization Spanish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_es | 108 | null | transformers | 4,472 |
---
language: Spanish
tags:
- summarization Spanish model
datasets:
- jrc-acquis
widget:
- text: "[notificada con el número C(2006) 166] (El texto en lengua portuguesa es el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparición de esta enfermedad puede representar un grave riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopción de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado varias decisiones para delimitar las zonas de protección y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protección y vigilancia en relación con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a través de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentación animal, lo que ha conllevado costes adicionales para los ganaderos. La situación tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducción de bovinos y de ovinos están ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida lógica de los animales criados en aquéllas, están localizadas fuera de dichas zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas para controlar la epidemia, como la realización de estudios epidemiológicos y la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serológico y virológico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomológica. (6) Portugal y España han presentado pruebas de su cooperación para evitar la propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8) El pago de la contribución financiera de la Comunidad se supedita a la realización efectiva de las acciones programadas y a la presentación por parte de las autoridades de toda la información necesaria en los plazos establecidos. 
(9) El 25 de febrero de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiológica. Procede asimismo determinar los importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN: Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica, incluida la adquisición de trampas. 2. El importe máximo de los gastos que se reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica (RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor añadido se excluirá de la participación financiera de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará a cabo previa presentación por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1, apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá un informe epidemiológico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico en un plazo de 60 días naturales a partir de la fecha de notificación de la presente Decisión. 
Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224 de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas | | | Trampas | | | Total | | -------------------------------------------------- "
---
# legal_t5_small_summ_es model
Model for Summarization of legal text written in Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpus from jrc-acquis.
## Model description
legal_t5_small_summ_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in Spanish.
### How to use
Here is how to use this model to summarize legal text written in Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_es"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_es", do_lower_case=False,
skip_special_tokens=True),
device=0
)
es_text = "[notificada con el número C(2006) 166] (El texto en lengua portuguesa es el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparición de esta enfermedad puede representar un grave riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopción de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado varias decisiones para delimitar las zonas de protección y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protección y vigilancia en relación con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a través de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentación animal, lo que ha conllevado costes adicionales para los ganaderos. La situación tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducción de bovinos y de ovinos están ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida lógica de los animales criados en aquéllas, están localizadas fuera de dichas zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas para controlar la epidemia, como la realización de estudios epidemiológicos y la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serológico y virológico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomológica. (6) Portugal y España han presentado pruebas de su cooperación para evitar la propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8) El pago de la contribución financiera de la Comunidad se supedita a la realización efectiva de las acciones programadas y a la presentación por parte de las autoridades de toda la información necesaria en los plazos establecidos. 
(9) El 25 de febrero de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiológica. Procede asimismo determinar los importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN: Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica, incluida la adquisición de trampas. 2. El importe máximo de los gastos que se reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica (RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor añadido se excluirá de la participación financiera de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará a cabo previa presentación por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1, apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá un informe epidemiológico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico en un plazo de 60 días naturales a partir de la fecha de notificación de la presente Decisión. 
Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224 de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas | | | Trampas | | | Total | | -------------------------------------------------- "
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_summ_es model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 23 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (with byte-pair encoding) that is used with this model.
### Pretraining
## Evaluation results
When the model is evaluated on the test dataset, it achieves the following results:
Test results :
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_es | 80.23|70.16 |78.69|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
bjorz/layoutxlm-finetuned-funsd-test | 12ca931a836ba14de3373dca116a1505f9b194bc | 2021-11-26T02:57:51.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | bjorz | null | bjorz/layoutxlm-finetuned-funsd-test | 108 | null | transformers | 4,473 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutxlm-finetuned-funsd-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.15.1
- Tokenizers 0.10.3
|
facebook/convnext-base-384-22k-1k | e83ff3dd233006abfb7d58fcaef2a697e0784b21 | 2022-03-02T19:03:18.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-base-384-22k-1k | 108 | null | transformers | 4,474 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (base-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-384-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-384-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
filco306/gpt2-romantic-poetry-paraphraser | faf568ab7ddf3a17231e533c55c335bc94d207ae | 2021-08-28T23:52:44.000Z | [
"pytorch",
"text-generation",
"arxiv:2010.05700",
"transformers"
] | text-generation | false | filco306 | null | filco306/gpt2-romantic-poetry-paraphraser | 108 | 1 | transformers | 4,475 | # GPT2 Romantic poetry style transfer paraphraser
This is the trained Romantic poetry-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
ismaelfaro/gpt2-poems.en | 0cf19ae5b1500bd3cd172571628596aa085655b3 | 2021-11-16T07:54:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"GPT",
"license:mit"
] | text-generation | false | ismaelfaro | null | ismaelfaro/gpt2-poems.en | 108 | 2 | transformers | 4,476 | ---
language: en
tags:
- GPT
license: mit
---
# GPT2-Poems Generator, English
This model is part of the Poems+AI experiment
more info https://poems-ai.github.io/art/
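A minimal generation sketch (the prompt and generation parameters are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ismaelfaro/gpt2-poems.en")
generator("The moon above the silent sea", max_length=60, num_return_sequences=1)
```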
# Original Dataset
- https://www.kaggle.com/michaelarman/poemsdataset
- Marcos de la Fuente's poems
|
malay-huggingface/bert-base-bahasa-cased | 5f21d8d66342ef2c05179a321e6e82f81c5b868f | 2021-09-11T16:05:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"ms",
"transformers",
"autotrain_compatible"
] | fill-mask | false | malay-huggingface | null | malay-huggingface/bert-base-bahasa-cased | 108 | 1 | transformers | 4,477 | ---
language: ms
---
# bert-base-bahasa-cased
Pretrained BERT base language model for Malay.
## Pretraining Corpus
`bert-base-bahasa-cased` model was pretrained on ~1.4 Billion words. Below is list of data we trained on,
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can reproduce from here, [Malaya/pretrained-model/bert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/bert).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import BertTokenizer, BertModel
model = BertModel.from_pretrained('malay-huggingface/bert-base-bahasa-cased')
tokenizer = BertTokenizer.from_pretrained(
'malay-huggingface/bert-base-bahasa-cased',
do_lower_case = False,
)
```
## Example using BertForMaskedLM
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
model = BertForMaskedLM.from_pretrained('malay-huggingface/bert-base-bahasa-cased')
tokenizer = BertTokenizer.from_pretrained(
'malay-huggingface/bert-base-bahasa-cased',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('Permohonan Najib, anak untuk dengar isu perlembagaan [MASK] .')
```
Output is,
```text
[{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan Malaysia.',
'score': 0.09178723394870758,
'token': 1957,
'token_str': 'M a l a y s i a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan negara.',
'score': 0.053524162620306015,
'token': 2134,
'token_str': 'n e g a r a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan dikemukakan.',
'score': 0.031137527897953987,
'token': 9383,
'token_str': 'd i k e m u k a k a n'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan 1MDB.',
'score': 0.02826082520186901,
'token': 13838,
'token_str': '1 M D B'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan ditolak.',
'score': 0.026568090543150902,
'token': 11465,
'token_str': 'd i t o l a k'}]
```
|
mofawzy/gpt2-arabic-sentence-generator | 7113997ab1155bb7a3e13f3acf938b7d5e237b34 | 2021-05-23T09:56:22.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mofawzy | null | mofawzy/gpt2-arabic-sentence-generator | 108 | null | transformers | 4,478 | ### GPT-2 Arabic Sentence Generator
Generates review sentences in Arabic.
- Language: Arabic
- Tags: Arabic, generate text, generate reviews
- Datasets: Large-scale Arabic book reviews (LABR) dataset.
#### Load Model
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mofawzy/gpt2-arabic-sentence-generator")
model = AutoModelWithLMHead.from_pretrained("mofawzy/gpt2-arabic-sentence-generator")
```
 |
mohsenfayyaz/toxicity-classifier | e4405ad8cd0afc1c1949da609ec488cd5b95c9ba | 2021-05-19T23:46:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mohsenfayyaz | null | mohsenfayyaz/toxicity-classifier | 108 | 4 | transformers | 4,479 | [BERT base model (uncased)](https://huggingface.co/bert-base-uncased) fine tuned on [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) |
nateraw/vit-base-cats-vs-dogs | 647f3e8f6ac2a24eb1eba17b2f7962a05390c832 | 2021-08-31T20:02:08.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:cats_vs_dogs",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nateraw | null | nateraw/vit-base-cats-vs-dogs | 108 | 1 | transformers | 4,480 | ---
license: apache-2.0
tags:
- generated_from_trainer
- image-classification
- pytorch
datasets:
- cats_vs_dogs
metrics:
- accuracy
model-index:
- name: vit-base-cats-vs-dogs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cats_vs_dogs
type: cats_vs_dogs
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9934510250569476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cats-vs-dogs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0202
- Accuracy: 0.9935
## Model description
More information needed
## Intended uses & limitations
More information needed
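Until this section is filled in, a minimal inference sketch (replace the path with your own image; a URL also works):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/vit-base-cats-vs-dogs")
classifier("path/to/your_cat_or_dog.jpg")
```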
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.064 | 1.0 | 311 | 0.0483 | 0.9849 |
| 0.0622 | 2.0 | 622 | 0.0275 | 0.9903 |
| 0.0366 | 3.0 | 933 | 0.0262 | 0.9917 |
| 0.0294 | 4.0 | 1244 | 0.0219 | 0.9932 |
| 0.0161 | 5.0 | 1555 | 0.0202 | 0.9935 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.1.dev0
- Tokenizers 0.10.3
|
sangrimlee/bert-base-multilingual-cased-nsmc | 47fdca16e29c52b9cffc78df8ff7bef7a657c4bb | 2021-06-02T18:46:18.000Z | [
"pytorch",
"bert",
"text-classification",
"ko",
"transformers"
] | text-classification | false | sangrimlee | null | sangrimlee/bert-base-multilingual-cased-nsmc | 108 | 1 | transformers | 4,481 | ---
language: ko
---
# BERT multilingual basecased finetuned with NSMC
This model is a fine-tune checkpoint of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased), fine-tuned on [NSMC(Naver Sentiment Movie Corpus)](https://github.com/e9t/nsmc).
## Usage
You can use this model directly with a pipeline for sentiment-analysis:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(
"sentiment-analysis", model="sangrimlee/bert-base-multilingual-cased-nsmc"
)
>>> classifier("흠...포스터보고 초딩영화줄....오버연기조차 가볍지 않구나.")
>>> classifier("액션이 없는데도 재미 있는 몇안되는 영화")
[{'label': 'negative', 'score': 0.9642567038536072}]
[{'label': 'positive', 'score': 0.9970554113388062}]
```
|
shahrukhx01/distilbart-cnn-12-6-text2sql | d38e8bf2bddd99fad7e41245fd62fcb83a0ebb45 | 2021-07-22T08:38:17.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | shahrukhx01 | null | shahrukhx01/distilbart-cnn-12-6-text2sql | 108 | 1 | transformers | 4,482 | The distilbart-cnn-12-6-text2sql model is fine-tuned on the WikiSQL dataset.
```python
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
model = BartForConditionalGeneration.from_pretrained('shahrukhx01/distilbart-cnn-12-6-text2sql')
tokenizer = BartTokenizer.from_pretrained('shahrukhx01/distilbart-cnn-12-6-text2sql')
TEXT_QUERY = "what is the temperature of berlin "
inputs = tokenizer([TEXT_QUERY], max_length=1024, return_tensors='pt')
# Generate SQL
text_query_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=64, early_stopping=True)  # allow enough tokens for a complete SQL query
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in text_query_ids])
``` |
uer/albert-large-chinese-cluecorpussmall | d659aceaef9c4309da7698b39a35fccd7869a33a | 2022-07-15T08:20:40.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/albert-large-chinese-cluecorpussmall | 108 | null | transformers | 4,483 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "中国的首都是[MASK]京"
---
# Chinese ALBERT
## Model description
This is the set of Chinese ALBERT models pre-trained by UER-py. You can download the model either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **ALBERT-Base** | [**L=12/H=768 (Base)**][base] |
| **ALBERT-Large** | [**L=24/H=1024 (Large)**][large] |
## How to use
You can use the model directly with a pipeline for masked language modeling (taking the case of ALBERT-Base):
```python
>>> from transformers import BertTokenizer, AlbertForMaskedLM, FillMaskPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
>>> model = AlbertForMaskedLM.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
>>> unmasker = FillMaskPipeline(model, tokenizer)
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '中 国 的 首 都 是 北 京 。',
'score': 0.8528032898902893,
'token': 1266,
'token_str': '北'},
{'sequence': '中 国 的 首 都 是 南 京 。',
'score': 0.07667620480060577,
'token': 1298,
'token_str': '南'},
{'sequence': '中 国 的 首 都 是 东 京 。',
'score': 0.020440367981791496,
'token': 691,
'token_str': '东'},
{'sequence': '中 国 的 首 都 是 维 京 。',
'score': 0.010197942145168781,
'token': 5335,
'token_str': '维'},
{'sequence': '中 国 的 首 都 是 汴 京 。',
'score': 0.0075391442514956,
'token': 3745,
'token_str': '汴'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, AlbertModel
tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
model = AlbertModel.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFAlbertModel
tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
model = TFAlbertModel.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of ALBERT-Base:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_albert_seq128_dataset.pt \
--seq_length 128 --processes_num 32 --data_processor albert
```
```
python3 pretrain.py --dataset_path cluecorpussmall_albert_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/albert/base_config.json \
--output_model_path models/cluecorpussmall_albert_base_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_albert_seq512_dataset.pt \
--seq_length 512 --processes_num 32 --data_processor albert
```
```
python3 pretrain.py --dataset_path cluecorpussmall_albert_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_albert_base_seq128_model.bin-1000000 \
--config_path models/albert/base_config.json \
--output_model_path models/cluecorpussmall_albert_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_albert_from_uer_to_huggingface.py --input_model_path cluecorpussmall_albert_base_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin
```
### BibTeX entry and citation info
```
@article{lan2019albert,
title={Albert: A lite bert for self-supervised learning of language representations},
author={Lan, Zhenzhong and Chen, Mingda and Goodman, Sebastian and Gimpel, Kevin and Sharma, Piyush and Soricut, Radu},
journal={arXiv preprint arXiv:1909.11942},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[base]:https://huggingface.co/uer/albert-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/albert-large-chinese-cluecorpussmall |
unicamp-dl/mMiniLM-L6-v2-pt-v2 | a9c0d68bae51b053e85747605350bb781d945712 | 2022-01-05T22:59:11.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"miniLM",
"tensorflow",
"pt-br",
"license:mit"
] | text-classification | false | unicamp-dl | null | unicamp-dl/mMiniLM-L6-v2-pt-v2 | 108 | 1 | transformers | 4,484 | ---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mMiniLM-L6-v2 Reranker finetuned on mMARCO
## Introduction
mMiniLM-L6-v2-pt-msmarco-v2 is a multilingual miniLM-based model finetuned on a Portuguese translated version of MS MARCO passage dataset. In the v2 version, the Portuguese dataset was translated using Google Translate.
Further information about the dataset or the translation method can be found on our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
model_name = 'unicamp-dl/mMiniLM-L6-v2-pt-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
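Since this checkpoint is a reranker, a more complete sketch scores a query-passage pair with a sequence-classification head (the head type is an assumption; the Portuguese strings are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'unicamp-dl/mMiniLM-L6-v2-pt-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

query = "quantos habitantes tem o Brasil?"  # illustrative query
passage = "O Brasil tem cerca de 213 milhões de habitantes."  # illustrative passage

# Score the (query, passage) pair; a higher score indicates higher relevance
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()
print(score)
```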
# Citation
If you use mMiniLM-L6-v2-pt-msmarco-v2, please cite:
```
@misc{bonifacio2021mmarco,
  title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
  author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
  year={2021},
  eprint={2108.13897},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
|
Finnish-NLP/t5-base-nl36-finnish | 67e45e92fda39507c3c59b2d1ded93550db33b5e | 2022-07-12T13:25:35.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"transformers",
"finnish",
"t5x",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Finnish-NLP | null | Finnish-NLP/t5-base-nl36-finnish | 108 | null | transformers | 4,485 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-base-nl36 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-base-nl36](https://huggingface.co/google/t5-efficient-base-nl36) architecture's layer depth which means both the encoder and the decoder have 36 transformer layers compared to the original T5 "base" model's architecture of 12 transformer layers.
In total, this model has 814 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, i.e. fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips).
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-base-nl36-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps with a batch size of 64 (in total 33B tokens). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay (exponential decay) of the learning rate.
The training code was from Google's JAX/Flax-based [t5x framework](https://github.com/google-research/t5x), and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the fifth row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |TBA |TBA |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
liamcripwell/ctrl44-simp | d3b0c1b54d0e1b198e6cd591d2354de8fdbd74d8 | 2022-04-21T09:32:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | liamcripwell | null | liamcripwell/ctrl44-simp | 108 | null | transformers | 4,486 | ---
language: en
---
# CTRL44 Simplification model
This is a pretrained version of the controllable simplification model presented in the NAACL 2022 paper "Controllable Sentence Simplification via Operation Classification". It was trained on the IRSD simplification dataset.
A control token is expected at the start of input sequences to dictate which simplification operation should be performed. This can either be done manually or with an operation classifier like [this one](https://huggingface.co/liamcripwell/ctrl44-clf).
Possible control tokens are: "\<ident\>", "\<para\>", "\<ssplit\>", and "\<dsplit\>".
## How to use
Here is how to use this model in PyTorch:
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
model = BartForConditionalGeneration.from_pretrained("liamcripwell/ctrl44-simp")
tokenizer = AutoTokenizer.from_pretrained("liamcripwell/ctrl44-simp")
text = "<para> Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=10, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))  # decoded simplification
``` |
nielsr/videomae-base | f3d59036fcae4437ddd82ea5377434097e9a0901 | 2022-05-23T07:29:59.000Z | [
"pytorch",
"videomae",
"transformers"
] | null | false | nielsr | null | nielsr/videomae-base | 108 | null | transformers | 4,487 | Entry not found |
salesken/translation-spanish-and-portuguese-to-english | 3d57d0d017dff690f46073ac0af3c6e9e89e1a3a | 2022-06-07T14:48:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"pt",
"transformers",
"translation",
"salesken",
"opus-mt-es-en",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | salesken | null | salesken/translation-spanish-and-portuguese-to-english | 108 | 1 | transformers | 4,488 | ---
license: apache-2.0
language:
- es
- pt
tags:
- translation
- salesken
- es
- pt
- opus-mt-es-en
---
opus-mt-es-en model fine-tuned on the Europarl parallel [Portuguese-English] corpus extracted from the proceedings of the European Parliament
source-language: Spanish, Portuguese
target-language: English
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("salesken/translation-spanish-and-portuguese-to-english")
model = AutoModelForSeq2SeqLM.from_pretrained("salesken/translation-spanish-and-portuguese-to-english")
snippet = "Eu estou falando em Língua Portuguesa."
inputs = tokenizer.encode(
snippet, return_tensors="pt",padding=True,max_length=512,truncation=True)
outputs = model.generate(
inputs, max_length=128, num_beams=None, early_stopping=True)
translated = tokenizer.decode(outputs[0]).replace('<pad>',"").strip().lower()
print(translated)
# I am speaking in Portuguese language
``` |
simecek/DNADebertaK6 | 40e97f538ddd32994f75f4dfc99454f30d80f653 | 2022-07-05T21:07:33.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNADebertaK6 | 108 | null | transformers | 4,489 | Entry not found |
Kiet/autotrain-resume_parser-1159242747 | 6d9135c1205ed09501b3be8a3fa48c41eaf72cac | 2022-07-21T07:37:21.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Kiet | null | Kiet/autotrain-resume_parser-1159242747 | 108 | null | transformers | 4,490 | Entry not found |
HooshvareLab/albert-fa-zwnj-base-v2-ner | 54971e5dd35e3177efcf39fc6e50a7bd520c41b3 | 2021-03-21T14:25:09.000Z | [
"pytorch",
"tf",
"albert",
"token-classification",
"fa",
"transformers",
"autotrain_compatible"
] | token-classification | false | HooshvareLab | null | HooshvareLab/albert-fa-zwnj-base-v2-ner | 107 | null | transformers | 4,491 | ---
language: fa
---
# AlbertNER
This model was fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by the model, both overall and per entity class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Albert | 0.993405 | 0.938907 | 0.943966 | 0.941429 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.820639 | 0.820639 | 0.820639 |
| EVE | 256 | 0.936803 | 0.984375 | 0.960000 |
| FAC | 248 | 0.925373 | 1.000000 | 0.961240 |
| LOC | 2884 | 0.960818 | 0.960818 | 0.960818 |
| MON | 98 | 0.913978 | 0.867347 | 0.890052 |
| ORG | 3216 | 0.920892 | 0.937500 | 0.929122 |
| PCT | 94 | 0.946809 | 0.946809 | 0.946809 |
| PER | 2644 | 0.960000 | 0.944024 | 0.951945 |
| PRO | 318 | 0.942943 | 0.987421 | 0.964670 |
| TIM | 43 | 0.780488 | 0.744186 | 0.761905 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install sentencepiece
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/albert-fa-zwnj-base-v2-ner" # Albert
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. |
aubmindlab/bert-large-arabertv02-twitter | b20deb8f0c4ef299a6dd8ccd7e92d57388bf6098 | 2021-10-16T22:11:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"dataset:Twitter",
"arxiv:2003.00104",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aubmindlab | null | aubmindlab/bert-large-arabertv02-twitter | 107 | 1 | transformers | 4,492 | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
- Twitter
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="center"/>
# AraBERTv0.2-Twitter
AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training using the MLM task on ~60M Arabic tweets (filtered from a collection of 100M).
The two new models have had emojis added to their vocabulary, in addition to common words that weren't initially present. The pre-training was done with a max sentence length of 64, for 1 epoch only.
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)
## Other Models
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets|
AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets|
# Preprocessing
**The model is trained on a sequence length of 64; using a max length beyond 64 might result in degraded performance**
It is recommended to apply our preprocessing function before training/testing on any dataset.
The preprocessor will keep and space out emojis when used with a "twitter" model.
```python
from arabert.preprocess import ArabertPreprocessor
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name="aubmindlab/bert-base-arabertv02-twitter"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02-twitter")
model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabertv02-twitter")
```
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks also to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support, to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access, and to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
avichr/hebEMO_fear | e8daed0829f44ffe03c1b912ea95718472da8bb5 | 2022-04-15T09:36:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | avichr | null | avichr/hebEMO_fear | 107 | null | transformers | 4,493 | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*Sentiment (polarity) analysis model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)
sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]
sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]
sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
dbmdz/bert-base-multilingual-cased-finetuned-conll03-spanish | b0863aca0295df6a37dd531ef1af7e77efe5619e | 2021-05-19T15:07:49.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | dbmdz | null | dbmdz/bert-base-multilingual-cased-finetuned-conll03-spanish | 107 | 1 | transformers | 4,494 | Entry not found |
google/bert_uncased_L-10_H-768_A-12 | d456ca4b9dc69ce69a645af23820e69a5077a2b8 | 2021-05-19T17:24:59.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-10_H-768_A-12 | 107 | null | transformers | 4,495 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
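As a quick reference, this particular checkpoint (L=10, H=768) can be loaded with the standard Transformers classes; the example sentence is illustrative:
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("google/bert_uncased_L-10_H-768_A-12")
model = BertModel.from_pretrained("google/bert_uncased_L-10_H-768_A-12")

inputs = tokenizer("Replace me with any English text.", return_tensors="pt")
outputs = model(**inputs)

# Hidden size is 768 for this model: (batch_size, sequence_length, 768)
print(outputs.last_hidden_state.shape)
```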
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
hajime9652/xlnet-japanese | fca1a82581958819199db627520605cfe1313ad8 | 2021-04-01T06:14:01.000Z | [
"pytorch",
"xlnet",
"text-generation",
"ja",
"transformers",
"lm-head",
"causal-lm"
] | text-generation | false | hajime9652 | null | hajime9652/xlnet-japanese | 107 | null | transformers | 4,496 | ---
language:
- ja
thumbnail:
tags:
- xlnet
- lm-head
- causal-lm
license:
datasets:
metrics:
---
# XLNet-japanese
## Model description
This model requires MeCab and SentencePiece with XLNetTokenizer.
For details, see https://qiita.com/mkt3/items/4d0ae36f3f212aee8002
## Intended uses & limitations
#### How to use
```python
import MeCab
import subprocess
from transformers import (
pipeline,
XLNetLMHeadModel,
XLNetTokenizer
)
class XLNet():
def __init__(self):
cmd = 'echo `mecab-config --dicdir`"/mecab-ipadic-neologd"'
path = (subprocess.Popen(cmd, stdout=subprocess.PIPE,
shell=True).communicate()[0]).decode('utf-8')
self.m = MeCab.Tagger(f"-Owakati -d {path}")
self.gen_model = XLNetLMHeadModel.from_pretrained("hajime9652/xlnet-japanese")
self.gen_tokenizer = XLNetTokenizer.from_pretrained("hajime9652/xlnet-japanese")
def generate(self, prompt="福岡のご飯は美味しい。コンパクトで暮らしやすい街。"):
prompt = self.m.parse(prompt)
inputs = self.gen_tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
prompt_length = len(self.gen_tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
outputs = self.gen_model.generate(inputs, max_length=200, do_sample=True, top_p=0.95, top_k=60)
generated = prompt + self.gen_tokenizer.decode(outputs[0])[prompt_length:]
return generated
```
#### Limitations and bias
## Training data
## Training procedure
## Eval results
See this document: https://qiita.com/mkt3/items/4d0ae36f3f212aee8002
published by https://github.com/mkt3 |
openclimatefix/dgmr-sampler | 00d6adb91010850c1d415fa17f1021f0b2f9aa75 | 2022-06-20T08:14:44.000Z | [
"pytorch",
"transformers"
] | null | false | openclimatefix | null | openclimatefix/dgmr-sampler | 107 | null | transformers | 4,497 | Entry not found |
speechbrain/asr-crdnn-commonvoice-fr | c63e8936e1d9ea85bb16154cb473840d52828049 | 2021-11-30T00:36:38.000Z | [
"fr",
"dataset:common_voice",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"pytorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-crdnn-commonvoice-fr | 107 | 5 | speechbrain | 4,498 | ---
language: "fr"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- common_voice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# CRDNN with CTC/Attention trained on CommonVoice French (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (French Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test CER | Test WER | GPUs |
|:-------------:|:--------------:|:--------------:| :--------:|
| 07-03-21 | 6.54 | 17.70 | 2xV100 16GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions (train.tsv) of CommonVoice (FR).
- Acoustic model (CRDNN + CTC/Attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalization and pooling on the
frequency domain. Then, a bidirectional LSTM is connected to a final DNN to obtain
the final acoustic representation that is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in French)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-commonvoice-fr", savedir="pretrained_models/asr-crdnn-commonvoice-fr")
asr_model.transcribe_file("speechbrain/asr-crdnn-commonvoice-fr/example-fr.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (986a2175).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/CommonVoice/ASR/seq2seq
python train.py hparams/train_fr.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/13i7rdgVX7-qZ94Rtj6OdUgU-S6BbKKvw?usp=sharing)
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
tscholak/t5.1.1.lm100k.base | ffc443c723cee9addb3d49b512c3c96a3f8859b8 | 2021-10-09T13:52:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tscholak | null | tscholak/t5.1.1.lm100k.base | 107 | null | transformers | 4,499 | Entry not found |