modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese | 487e7812174e76b913ed0f9f281d5e6963c9d769 | 2022-07-30T02:13:07.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"zh",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | IDEA-CCNL | null | IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese | 331 | 2 | transformers | 2,800 | ---
language: zh
tags:
- summarization
inference: False
---
The IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese model (Chinese) has 238M parameters and was pretrained on 180G of Chinese data with the GSG task, which stochastically samples important sentences with a gap-sentence ratio of 25%. The pretraining task is the same as described in the paper PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization.
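As a purely illustrative sketch of the GSG objective (not the PEGASUS implementation itself, which scores sentences with ROUGE), the snippet below picks the most "important" sentences by character overlap with the rest of the document, masks them in the source, and uses them as the generation target:
```python
# Illustrative GSG sketch: character-overlap importance instead of ROUGE scoring.
import re

def gsg_mask(document, gap_ratio=0.25, mask_token="[MASK]"):
    sentences = [s for s in re.split(r"(?<=[。!?.!?])", document) if s.strip()]
    def importance(i):
        sent = set(sentences[i])
        rest = set("".join(s for j, s in enumerate(sentences) if j != i))
        return len(sent & rest) / max(len(sent), 1)
    k = max(1, int(len(sentences) * gap_ratio))
    selected = set(sorted(range(len(sentences)), key=importance, reverse=True)[:k])
    source = "".join(mask_token if i in selected else s for i, s in enumerate(sentences))
    target = "".join(sentences[i] for i in sorted(selected))
    return source, target

source, target = gsg_mask("今天天气很好。我们去公园散步。公园里有很多人。大家都玩得很开心。")
print(source)
print(target)
```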
Unlike the English version of PEGASUS, and considering that SentencePiece is unstable for Chinese, we use jieba and BertTokenizer as the tokenizer in the Chinese PEGASUS model.
After pre-training, we used 8 summarization datasets collected from the internet for supervised training, including education_data, new2016zh_data, nlpcc, shence_data, sohu_data, thucnews_data and weibo_data, four million training samples in all.
| datasets | rouge-1 | rouge-2 | rouge-L |
| ---- | ---- | ---- | ---- |
| LCSTS | 43.46 | 29.59 | 39.76 |
Task: Summarization
## Usage
```python
from transformers import PegasusForConditionalGeneration,BertTokenizer
# You need to download tokenizers_pegasus.py and other Python scripts from the Fengshenbang-LM GitHub repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py from https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M/tree/main
# We strongly recommend cloning the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# There you will find tokenizers_pegasus.py and data_utils.py, which are needed by the pegasus model
from tokenizers_pegasus import PegasusTokenizer
model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese")
text = "在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# model Output: 滑雪女子坡面障碍技巧决赛谷爱凌获银牌
```
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Chiuchiyin/DialoGPT-small-Donald | ac54baeb0180ab48773855bb0effd50c65b3ba9b | 2022-02-10T20:16:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Chiuchiyin | null | Chiuchiyin/DialoGPT-small-Donald | 330 | null | transformers | 2,801 | ---
tags:
- conversational
---
Donald Trump DialoGPT model built by following the tutorial by [Ruolin Zheng](https://youtu.be/Rk8eM1p_xgM).
The data used for training was the 2020 presidential debate.
More work is needed to optimize it; I don't have access to a GPU with more VRAM. |
MagnusChase7/DialoGPT-medium-harrypotter | c7a849a63595340f9d95c375197ebfeeee9fae2b | 2021-10-03T09:35:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MagnusChase7 | null | MagnusChase7/DialoGPT-medium-harrypotter | 330 | null | transformers | 2,802 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
HooshvareLab/bert-fa-base-uncased-clf-persiannews | 8e56d26ddcd7e7fdd5eb7b6c99a130e0cd7bcffd | 2021-05-18T20:51:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | HooshvareLab | null | HooshvareLab/bert-fa-base-uncased-clf-persiannews | 330 | 2 | transformers | 2,803 | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.
### Persian News
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
1. Economic
2. International
3. Political
4. Science Technology
5. Cultural Art
6. Sport
7. Medical
8. Social
| Label | # |
|:------------------:|:----:|
| Social | 2170 |
| Economic | 1564 |
| International | 1975 |
| Political | 2269 |
| Science Technology | 2436 |
| Cultural Art | 2558 |
| Sport | 1381 |
| Medical | 2085 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Persian News | 97.44* | 97.19 | 95.79 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Text Classification | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
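For a quick start outside the notebook, here is a minimal sketch with the `transformers` pipeline (the Persian example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Persian news classifier as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-clf-persiannews",
)

# Illustrative Persian sentence ("Iran's national football team advanced to the World Cup.")
print(classifier("تیم ملی فوتبال ایران به جام جهانی صعود کرد."))
```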
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
HooshvareLab/bert-fa-base-uncased-sentiment-snappfood | b6fcfc5e7334c8dbc991153b1406a6f8be547423 | 2021-05-18T21:00:55.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | HooshvareLab | null | HooshvareLab/bert-fa-base-uncased-sentiment-snappfood | 330 | null | transformers | 2,804 | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task aims to classify texts, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in both binary and multi-class forms.
### SnappFood
[Snappfood](https://snappfood.ir/) (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification):
1. Happy
2. Sad
| Label | # |
|:--------:|:-----:|
| Negative | 35000 |
| Positive | 35000 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SnappFood User Comments | 87.98 | 88.12* | 87.87 | - |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
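Alternatively, a minimal sentiment sketch with the `transformers` pipeline (the Persian review below is illustrative):
```python
from transformers import pipeline

# Load the SnappFood sentiment model as a text-classification pipeline.
sentiment = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-snappfood",
)

# Illustrative Persian review ("The food arrived very late and was cold.")
print(sentiment("غذا خیلی دیر رسید و سرد بود."))
```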
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
Julianqll/DialoGPT-small-ricksanchez | 4a8787b15e15c8feef90c700e2bfb82fb24ff80e | 2021-08-30T05:50:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Julianqll | null | Julianqll/DialoGPT-small-ricksanchez | 330 | null | transformers | 2,805 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
Kryptone/Burobot | 1ca47a3ebc5c3558bd1f84a3919087ba271947b5 | 2021-09-14T15:20:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kryptone | null | Kryptone/Burobot | 330 | null | transformers | 2,806 | ---
tags:
- conversational
---
# Buro discord bot |
marshmellow77/roberta-base-cuad | 15d33c83a243fb039ce648bb2f08ec890f5b682f | 2021-12-10T15:22:42.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"transformers",
"autotrain_compatible"
] | question-answering | false | marshmellow77 | null | marshmellow77/roberta-base-cuad | 330 | null | transformers | 2,807 | ---
language: en
datasets:
- cuad
---
# RoBERTa Base Model fine-tuned with CUAD dataset
This model is a fine-tuned version of "RoBERTa Base"
using the CUAD dataset: https://huggingface.co/datasets/cuad
Link to the model checkpoint: https://github.com/TheAtticusProject/cuad
For the use of the model with CUAD: https://github.com/marshmellow77/cuad-demo
and https://huggingface.co/spaces/marshmellow77/contract-review
Related blog posts:
- https://towardsdatascience.com/how-to-set-up-a-machine-learning-model-for-legal-contract-review-fe3b48b05a0e
- https://towardsdatascience.com/how-to-set-up-a-machine-learning-model-for-legal-contract-review-part-2-6ecbbe680ba
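A minimal usage sketch with the `transformers` question-answering pipeline; the contract excerpt and CUAD-style question below are illustrative only:
```python
from transformers import pipeline

# Load the CUAD-fine-tuned RoBERTa model for extractive question answering.
qa = pipeline("question-answering", model="marshmellow77/roberta-base-cuad")

# Hypothetical contract excerpt and question, for illustration only.
context = (
    "This Agreement shall commence on January 1, 2021 and shall continue "
    "for a period of two (2) years."
)
question = "Highlight the parts (if any) of this contract related to the expiration date of the contract."

print(qa(question=question, context=context))
```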
|
cffl/bart-base-styletransfer-subjective-to-neutral | a86005a5aa18bb88b513344a08fb63c7f1698461 | 2022-07-12T11:58:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:1911.09709",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | cffl | null | cffl/bart-base-styletransfer-subjective-to-neutral | 330 | 1 | transformers | 2,808 | ---
license: apache-2.0
---
# bart-base-styletransfer-subjective-to-neutral
## Model description
This [facebook/bart-base](https://huggingface.co/facebook/bart-base) model has been fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://arxiv.org/pdf/1911.09709.pdf) - a parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The model can be used to transfer style in text from subjectively biased to neutrally toned.
The development and modeling efforts that produced this model are documented in detail through [this blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html).
## Intended uses & limitations
The model is intended purely as a research output for NLP and data science communities. We imagine this model will be used by researchers to better understand the limitations, robustness, and generalization of text style transfer models. Ultimately, we hope this model will inspire future work on text style transfer and serve as a benchmarking tool for the style attribute of subjectivity bias, specifically.
Any production use of this model - whether commercial or not - is currently not intended. This is because, as [the team at OpenAI points out](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases), large language models like BART reflect biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans, unless the deployers first carry out a study of biases relevant to the intended use-case. Neither the model nor the WNC dataset has been sufficiently evaluated for performance and bias. Our efforts quantified model performance using two custom evaluation metrics, neither of which has been correlated to human evaluation for the task.
As we discuss in the blog series, since the WNC is a parallel dataset and we formulate the learning task as a supervised problem, the model indirectly adopts Wikipedia's NPOV policy as the definition for "neutrality" and "subjectivity". The NPOV policy may not fully reflect an end user's assumed/intended meaning of subjectivity because the notion of subjectivity itself can be...well, subjective.
We discovered through our exploratory work that the WNC does contain data quality issues that will contribute to unintended bias in the model. For example, some NPOV revisions introduce factual information outside the context of the prompt as a means to correct bias. We believe these factual based edits are out of scope for a subjective-to-neutral style transfer modeling task, but exist here nonetheless.
## How to use
This model can be used directly with a HuggingFace pipeline for `text2text-generation`.
```python
>>> from transformers import pipeline
>>> styletransfer = pipeline(
task="text2text-generation",
model="cffl/bart-base-styletransfer-subjective-to-neutral",
max_length=200,
)
>>> input_text = "chemical abstracts service (cas), a prominent division of the american chemical society, is the world's leading source of chemical information."
>>> styletransfer(input_text)
[{'generated_text': 'chemical abstracts service (cas), a division of the american chemical society, is a source of chemical information.'}]
```
## Training procedure
For modeling, we made extensive use of the Huggingface transformers library by initializing the [BartForConditionalGeneration](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartForConditionalGeneration) model with [facebook/bart-base](https://huggingface.co/facebook/bart-base) pretrained weights and adapting the [summarization fine-tuning script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) for our TST-specific needs. We fine-tune the model for 15 epochs on an NVIDIA Tesla V100 GPU with a batch size of 32. (Note that when fine-tuning the model with the parallel examples, the noising function is turned off so an uncorrupted document is passed to BART's encoder and decoder.)
Please refer to [our blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html) for a discussion of evaluation metrics and results.
|
anon-submission-mk/bert-base-macedonian-cased | 7dba83429927adc743a394f4c666fca84724dfdc | 2021-05-18T23:40:49.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | anon-submission-mk | null | anon-submission-mk/bert-base-macedonian-cased | 329 | null | transformers | 2,809 | Entry not found |
Babysittingyoda/DialoGPT-small-Peter | 10eeb89ef461346db155050337745664ec12738b | 2022-06-13T01:21:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Babysittingyoda | null | Babysittingyoda/DialoGPT-small-Peter | 329 | null | transformers | 2,810 | ---
tags:
- conversational
---
# A Peter DialoGPT Model |
DTAI-KULeuven/robbertje-1-gb-shuffled | ab3674050f63304c795e20af154579ec69440594 | 2022-02-24T09:56:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | DTAI-KULeuven | null | DTAI-KULeuven/robbertje-1-gb-shuffled | 328 | null | transformers | 2,811 | ---
language: "nl"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
- RobBERTje
license: mit
datasets:
- oscar
- oscar (NL)
- dbrd
- lassy-ud
- europarl-mono
- conll2002
widget:
- text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."
---
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | this model |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) |
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
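For illustration, here is a simplified sketch of how a pseudo-perplexity of this kind can be computed with a masked LM (this is not the exact implementation from the distillation library):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("DTAI-KULeuven/robbertje-1-gb-shuffled")
model = AutoModelForMaskedLM.from_pretrained("DTAI-KULeuven/robbertje-1-gb-shuffled")
model.eval()

def pseudo_perplexity(sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total_log_prob, n = 0.0, 0
    # Mask each non-special token in turn and score the original token.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total_log_prob += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        n += 1
    return float(torch.exp(torch.tensor(-total_log_prob / n)))

print(pseudo_perplexity("Hallo, ik ben RobBERTje, een gedistilleerd taalmodel van de KU Leuven."))
```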
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model | DBRD | DIE-DAT | NER | POS |SICK-NL |
|------------------|-----------|-----------|-----------|-----------|----------|
| RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 |
| Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 |
| Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 |
| Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 |
| BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
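Finally, a minimal fill-mask usage sketch, reusing the widget prompt from this card with the standard `transformers` pipeline:
```python
from transformers import pipeline

# Fill-mask pipeline for the distilled Dutch model.
unmasker = pipeline("fill-mask", model="DTAI-KULeuven/robbertje-1-gb-shuffled")

predictions = unmasker(
    "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."
)
for prediction in predictions:
    print(prediction["token_str"], prediction["score"])
```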
|
Helsinki-NLP/opus-mt-en-dra | 4cd7913fc9e4c9df6bbcf8f92fab3de9e9e5fd98 | 2021-01-18T08:06:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ta",
"kn",
"ml",
"te",
"dra",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-dra | 328 | 1 | transformers | 2,812 | ---
language:
- en
- ta
- kn
- ml
- te
- dra
tags:
- translation
license: apache-2.0
---
### eng-dra
* source group: English
* target group: Dravidian languages
* OPUS readme: [eng-dra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md)
* model: transformer
* source language(s): eng
* target language(s): kan mal tam tel
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.eval.txt)
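A minimal usage sketch showing the required `>>id<<` target-language token (here `>>tam<<` for Tamil; the English sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-dra"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix the source sentence with the target-language token.
batch = tokenizer([">>tam<< How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```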
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kan.eng.kan | 4.7 | 0.348 |
| Tatoeba-test.eng-mal.eng.mal | 13.1 | 0.515 |
| Tatoeba-test.eng.multi | 10.7 | 0.463 |
| Tatoeba-test.eng-tam.eng.tam | 9.0 | 0.444 |
| Tatoeba-test.eng-tel.eng.tel | 7.1 | 0.363 |
### System Info:
- hf_name: eng-dra
- source_languages: eng
- target_languages: dra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']
- src_constituents: {'eng'}
- tgt_constituents: {'tam', 'kan', 'mal', 'tel'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: dra
- short_pair: en-dra
- chrF2_score: 0.46299999999999997
- bleu: 10.7
- brevity_penalty: 1.0
- ref_len: 7928.0
- src_name: English
- tgt_name: Dravidian languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: dra
- prefer_old: False
- long_pair: eng-dra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
NlpHUST/t5-en-vi-small | bcda135603448b2cefd83d9959d384cd606332c3 | 2021-06-23T03:33:59.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"arxiv:1706.05565",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NlpHUST | null | NlpHUST/t5-en-vi-small | 328 | 1 | transformers | 2,813 | # T5-EN-VI-SMALL: Pretraining Text-To-Text Transfer Transformer for English-Vietnamese Translation
# Dataset
The *IWSLT'15 English-Vietnamese* data is used from [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/).
For all experiments the corpus was split into training, development and test set:
| Data set | Sentences | Download
| ----------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------
| Training | 133,317 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz`
| Development | 1,553 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz`
| Test | 1,268 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz`
## Results
The results on test set.
| Model | BLEU (Beam Search)
| ----------------------------------------------------------------------------------------------------- | ------------------
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30
| Sequence-to-sequence model with attention | 26.10
| Neural Phrase-based Machine Translation [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69
| Neural Phrase-based Machine Translation + LM [Huang et. al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased)
|t5-en-vi-small (fineturning with training data) | **32.38** (cased) / **33.19** (uncased)
#### Example Usage
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-small")
model.to(device)
src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
tokenized_text,
max_length=128,
num_beams=5,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```
#### Output
```text
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]). |
bhaden94/LokiDiscordBot-medium | dc65475340ed8d3e5690e5c4060596c59a213a9a | 2021-09-03T19:04:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | bhaden94 | null | bhaden94/LokiDiscordBot-medium | 328 | null | transformers | 2,814 | ---
tags:
- conversational
---
# Loki GPT Dialog Bot |
ncoop57/DiGPTame-medium | 09ff72a405794d4a7305b2062854ca8085b4f94a | 2021-05-23T10:47:17.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"arxiv:1911.00536",
"transformers",
"conversational",
"license:mit"
] | conversational | false | ncoop57 | null | ncoop57/DiGPTame-medium | 328 | null | transformers | 2,815 | ---
tags:
- conversational
license: mit
---
## Finetuned DialoGPT model on Spanish Conversations
This model was finetuned from the original [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model on subtitles from Spanish movies and telenovelas from the awesome [OpenSubtitle dataset](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles).
DialoGPT paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
Some example dialog from the model:
|Role | Response |
|---------|--------|
|User | ¿Cómo está el clima hoy? |
| Bot | Muy bien. |
|User | Eso es bueno escuchar. ¿Vas a ir a la playa? |
| Bot | No, también puedo regresar. |
|User | Oh bien, bueno, iré a la playa más tarde. |
| Bot | ¡No! |
|User | ¿Qué está mal con eso? |
| Bot | ¡Cómo que no! |
|User | Estoy confundido, ¿por qué no puedo ir a la playa? |
| Bot | ¡Cómo que no! |
|User | Explícamelo por favor. |
| Bot | ¡No! |
## Using the model
Example code for trying out the model (taken directly from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card):
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("ncoop57/DiGPTame-medium")
model = AutoModelWithLMHead.from_pretrained("ncoop57/DiGPTame-medium")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last output tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
## Training your own model
If you would like to finetune your own model or finetune this Spanish model, please checkout my blog post on that exact topic!
https://nathancooper.io/i-am-a-nerd/chatbot/deep-learning/gpt2/2020/05/12/chatbot-part-1.html |
huggingtweets/tylerthecreator | ccb4fc494f22c35859d54f75789c64b307d90b80 | 2021-05-23T03:09:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/tylerthecreator | 327 | null | transformers | 2,816 | ---
language: en
thumbnail: https://www.huggingtweets.com/tylerthecreator/1608310160581/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320884926665814016/LJ8wSAJ3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Tyler, The Creator 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@tylerthecreator bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@tylerthecreator's tweets](https://twitter.com/tylerthecreator).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3213</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>452</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>633</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2128</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1u6g35xr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tylerthecreator's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2m8auax4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2m8auax4/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/tylerthecreator'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
ludowoods/KujouSara | da23023f75c5a8000f5902020cff276ff61f2660 | 2021-10-16T19:11:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ludowoods | null | ludowoods/KujouSara | 327 | null | transformers | 2,817 | ---
tags:
- conversational
---
# Kujou Sara bot
|
mrm8488/bert-spanish-cased-finetuned-pos | 19a26179150a6af192ec694481fb021f00bce074 | 2021-05-20T00:38:26.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"es",
"transformers",
"POS",
"Spanish",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/bert-spanish-cased-finetuned-pos | 327 | 1 | transformers | 2,818 | ---
language: es
thumbnail: https://i.imgur.com/jgBdimh.png
tags:
- POS
- Spanish
widget:
- text: "Mis amigos y yo estamos pensando en viajar a Londres este verano."
---
# Spanish BERT (BETO) + POS
This model is a version of the Spanish cased BERT [(BETO)](https://github.com/dccuchile/beto) fine-tuned on the Spanish [CONLL CORPORA](https://www.kaggle.com/nltkdata/conll-corpora) for the **POS** (Part of Speech tagging) downstream task.
## Details of the downstream task (POS) - Dataset
- [Dataset: CONLL Corpora ES](https://www.kaggle.com/nltkdata/conll-corpora) with data augmentation techniques
I preprocessed the dataset and split it as train / dev (80/20)
| Dataset | # Examples |
| ---------------------- | ----- |
| Train | 340 K |
| Dev | 50 K |
- [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py)
- **60** Labels covered:
```
AO, AQ, CC, CS, DA, DD, DE, DI, DN, DP, DT, Faa, Fat, Fc, Fd, Fe, Fg, Fh, Fia, Fit, Fp, Fpa, Fpt, Fs, Ft, Fx, Fz, I, NC, NP, P0, PD, PI, PN, PP, PR, PT, PX, RG, RN, SP, VAI, VAM, VAN, VAP, VAS, VMG, VMI, VMM, VMN, VMP, VMS, VSG, VSI, VSM, VSN, VSP, VSS, Y and Z
```
## Metrics on evaluation set:
| Metric | # score |
| :------------------------------------------------------------------------------------: | :-------: |
| F1 | **90.06**
| Precision | **89.46** |
| Recall | **90.67** |
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
nlp_pos = pipeline(
"ner",
model="mrm8488/bert-spanish-cased-finetuned-pos",
tokenizer=(
'mrm8488/bert-spanish-cased-finetuned-pos',
{"use_fast": False}
))
text = 'Mis amigos están pensando en viajar a Londres este verano'
nlp_pos(text)
#Output:
'''
[{'entity': 'NC', 'score': 0.7792173624038696, 'word': '[CLS]'},
{'entity': 'DP', 'score': 0.9996283650398254, 'word': 'Mis'},
{'entity': 'NC', 'score': 0.9999253749847412, 'word': 'amigos'},
{'entity': 'VMI', 'score': 0.9998560547828674, 'word': 'están'},
{'entity': 'VMG', 'score': 0.9992249011993408, 'word': 'pensando'},
{'entity': 'SP', 'score': 0.9999602437019348, 'word': 'en'},
{'entity': 'VMN', 'score': 0.9998666048049927, 'word': 'viajar'},
{'entity': 'SP', 'score': 0.9999545216560364, 'word': 'a'},
{'entity': 'VMN', 'score': 0.8722310662269592, 'word': 'Londres'},
{'entity': 'DD', 'score': 0.9995203614234924, 'word': 'este'},
{'entity': 'NC', 'score': 0.9999248385429382, 'word': 'verano'},
{'entity': 'NC', 'score': 0.8802427649497986, 'word': '[SEP]'}]
'''
```

16 POS tags version also available [here](https://huggingface.co/mrm8488/bert-spanish-cased-finetuned-pos-16-tags)
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
Bman/DialoGPT-medium-dora | 63224db7f2e6a3b3b0c82180cc2552cd08377234 | 2022-06-25T15:38:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Bman | null | Bman/DialoGPT-medium-dora | 327 | null | transformers | 2,819 | ---
tags:
- conversational
---
# Dora DialoGPT Model |
felinecity/ScaraBot | a769fdc0fea23a98f6f839e7edf765623c60f0e9 | 2022-01-25T22:33:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | felinecity | null | felinecity/ScaraBot | 326 | null | transformers | 2,820 | ---
tags:
- conversational
---
# DialoGPT KaeyaBot model |
nsi319/legal-led-base-16384 | d1c0c7730126e04e5c1efd991ef78f4eb0513def | 2021-03-01T12:33:48.000Z | [
"pytorch",
"led",
"text2text-generation",
"en",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | nsi319 | null | nsi319/legal-led-base-16384 | 326 | null | transformers | 2,821 | ---
language: en
tags: summarization
metrics:
- rouge
- precision
inference: false
license: mit
---
## LED for legal summarization of documents
This is a Longformer Encoder Decoder ([led-base-16384](https://huggingface.co/allenai/led-base-16384)) model for the **legal domain**, trained for the **long document abstractive summarization** task. The length of the document can be up to 16,384 tokens.
## Training data
The **legal-led-base-16384** model was trained on the [sec-litigation-releases](https://www.sec.gov/litigation/litreleases.htm) dataset, consisting of more than 2,700 litigation releases and complaints.
## How to use
```Python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-led-base-16384")
padding = "max_length"
text="""On March 2, 2018, the Securities and Exchange Commission announced securities fraud charges against a U.K.-based broker-dealer and its investment manager in connection with manipulative trading in the securities of HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View's securities as well as the securities of another microcap issuer, West Coast Ventures Group Corp. The SEC further announced the institution of an order suspending trading in the securities of HD View.These charges arise in part from an undercover operation by the Federal Bureau of Investigation, which also resulted in related criminal prosecutions against these defendants by the Office of the United States Attorney for the Eastern District of New York.In a complaint filed in the U.S. District Court for the Eastern District of New York, the SEC alleges that Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, manipulated the market for HD View's common stock. The scheme involved an undercover FBI agent who described his business as manipulating U.S. stocks through pump-and-dump schemes. Kyriacou and the agent discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit.The SEC's complaint against Beaufort and Kyriacou alleges that they:opened brokerage accounts for the undercover agent in the names of nominees in order to conceal his identity and his connection to the anticipated trading activity in the accounts suggested that the undercover agent could create the false appearance that HD View's stock was liquid in advance of a pump-and-dump by "gam[ing] the market" through matched trades executed multiple purchase orders of HD View shares with the understanding that Beaufort's client had arranged for an associate to simultaneously offer an equivalent number of shares at the same priceA second complaint filed by the SEC in the U.S. District Court for the Eastern District of New York alleges that in a series of recorded telephone conversations with the undercover agent, HD View CEO Dennis Mancino and William T. Hirschy agreed to manipulate HD View's common stock by using the agent's network of brokers to generate fraudulent retail demand for the stock in exchange for a kickback from the trading proceeds. According to the complaint, the three men agreed that Mancino and Hirschy would manipulate HD View stock to a higher price before using the agent's brokers to liquidate their positions at an artificially inflated price. The SEC's complaint also alleges that Mancino and Hirschy executed a "test trade" on Jan. 31, 2018, coordinated by the agent, consisting of a sell order placed by the defendants filled by an opposing purchase order placed by a broker into an account at Beaufort. Unbeknownst to Mancino and Hirschy, the Beaufort account used for this trade was a nominal account that was opened and funded by the agent. 
The SEC's complaint also alleges that, prior to their contact with the undercover agent, Mancino and Hirschy manipulated the market for HD View and for West Coast by using brokerage accounts that they owned, controlled, or were associated with –including TJM Investments Inc., DJK Investments 10 Inc., WT Consulting Group LLC – to effect manipulative "matched trades."The SEC's complaint against Beaufort and Kyriacou charges the defendants with violating Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The SEC also charged Hirschy, Mancino, and their corporate entities with violating Section 17(a)(1) of the Securities Act of 1933, Sections 9(a)(1), 9(a)(2), and 10(b) of the Exchange Act and Rules 10b-5(a) and (c) thereunder. The SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, and penny stock bars from Beaufort and Kyriacou. With respect to Hirschy, Mancino, and their corporate entities, the SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, penny stock bars, and an officer-and-director bar against Mancino.The investigation was conducted in the SEC's New York Regional Office by Tejal Shah and Joseph Darragh, Lorraine Collazo, and Michael D. Paley of the Microcap Fraud Task Force and supervised by Lara S. Mehraban, and in Washington, D.C. by Patrick L. Feeney, Robert Nesbitt, and Kevin Guerrero, and supervised by Antonia Chion. Preethi Krishnamurthy and Ms. Shah will lead the SEC's litigation against Beaufort and Kyriacou. Ann H. Petalas and Mr. Feeney, under the supervision of Cheryl Crumpton, will handle the SEC's litigation against Mancino, Hirschy, and their entities. The SEC appreciates the assistance of the Office of the United States Attorney for the Eastern District of New York, the Federal Bureau of Investigation, the Internal Revenue Service, the Alberta Securities Commission, the Ontario Securities Commission, the Financial Conduct Authority of the United Kingdom, and the Financial Industry Regulatory Authority.The Commission's investigation in this matter is continuing."""
input_tokenized = tokenizer.encode(text, return_tensors='pt',padding=padding,pad_to_max_length=True, max_length=6144,truncation=True)
summary_ids = model.generate(input_tokenized,
num_beams=4,
no_repeat_ngram_size=3,
length_penalty=2,
min_length=350,
max_length=500)
summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
### Summary Output
# On March 2, 2018, the Securities and Exchange Commission charged Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, with manipulating the market for HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View through pump-and-dump schemes. According to the SEC's complaint, the defendants discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit. In a parallel action, the United States Attorney's Office for the Eastern District of New York announced criminal charges against the defendants. On March 4, the SEC announced the entry of an order suspending trading in the securities of HD View and for West Coast, pending the outcome of a parallel criminal action by the Federal Bureau of Investigation. Following the announcement of the suspension, HD View stock prices and volume increased significantly, and the defendants agreed to pay over $1.5 million in disgorgement, prejudgment interest, penalties, and an officer and director bar. Beaufort agreed to settle the charges without admitting or denying the allegations of the complaint, and to pay a $1 million civil penalty. The SEC's investigation, which is continuing, has been conducted by Patrick McCluskey and Cheryl Crumpton of the SEC Enforcement Division's Market Abuse Unit in the New York Regional Office. The SEC appreciates the assistance of the Financial Industry Regulatory Authority of the United Kingdom, the Canadian Securities Commission, the Alberta Securities Commission and the Ontario Securities Commission.
```
## Evaluation results
When the model is used for summarizing legal documents, it achieves the following results:
| Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| legal-led-base-16384 | **55.69** | **61.73** | **29.03** | **36.68** | **32.65** | **40.43** |
| led-base-16384 | 29.19 | 30.43 | 15.23 | 16.27 | 16.32 | 16.58 |
|
speechbrain/lang-id-commonlanguage_ecapa | 550305dcb9c861cabbfbf1a2a092f53024d5e021 | 2021-11-30T00:47:40.000Z | [
"en",
"dataset:Urbansound8k",
"arxiv:2106.04624",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"CommonLanguage",
"license:apache-2.0"
] | audio-classification | false | speechbrain | null | speechbrain/lang-id-commonlanguage_ecapa | 326 | 13 | speechbrain | 2,822 | ---
language: "en"
thumbnail:
tags:
- audio-classification
- speechbrain
- embeddings
- Language
- Identification
- pytorch
- ECAPA-TDNN
- TDNN
- CommonLanguage
license: "apache-2.0"
datasets:
- Urbansound8k
metrics:
- Accuracy
widget:
- example_title: English Sample
src: https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Language Identification from Speech Recordings with ECAPA embeddings on CommonLanguage
This repository provides all the necessary tools to perform language identification from speech recordings with SpeechBrain.
The system uses a model pretrained on the CommonLanguage dataset (45 languages).
You can download the dataset [here](https://zenodo.org/record/5036977#.YNzDbXVKg5k)
The provided system can recognize the following 45 languages from short speech recordings:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mangolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The given model performance on the test set is:
| Release | Accuracy (%)
|:-------------:|:--------------:|
| 30-06-21 | 85.0 |
## Pipeline description
This system is composed of a ECAPA model coupled with statistical pooling. A classifier, trained with Categorical Cross-Entropy Loss, is applied on top of that.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Language Identification from Speech Recordings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="speechbrain/lang-id-commonlanguage_ecapa", savedir="pretrained_models/lang-id-commonlanguage_ecapa")
# Italian Example
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-it.wav')
print(text_lab)
# French Example
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-fr.wav')
print(text_lab)
```
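As noted above, *classify_batch* and *encode_batch* do not normalize the audio for you; below is a minimal sketch that selects a single channel and resamples to 16 kHz first (the audio path is a placeholder):
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-commonlanguage_ecapa",
    savedir="pretrained_models/lang-id-commonlanguage_ecapa",
)

# "your_audio_file.wav" is a placeholder for a local recording.
signal, fs = torchaudio.load("your_audio_file.wav")
signal = signal.mean(dim=0, keepdim=True)  # force mono
if fs != 16000:
    signal = torchaudio.functional.resample(signal, fs, 16000)

out_prob, score, index, text_lab = classifier.classify_batch(signal)
print(text_lab)
```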
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (a02f860e).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/CommonLanguage/lang_id
python train.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1sD2u0MhSmJlx_3RRgwsYzevX81RM8-WE?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA
```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
Lenza/DialoGPT-medium-Kobayashi | 87299c4e1c60f65fd557e04632c76c71bc164670 | 2021-08-28T11:24:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Lenza | null | Lenza/DialoGPT-medium-Kobayashi | 325 | null | transformers | 2,823 | ---
tags:
- conversational
---
# Kobayashi DialoGPT Model |
aware-ai/bart-squadv2 | 1e4c70c05ad41362893cca503307897391fba985 | 2020-12-11T21:30:58.000Z | [
"pytorch",
"bart",
"question-answering",
"dataset:squad_v2",
"arxiv:1910.13461",
"transformers",
"autotrain_compatible"
] | question-answering | false | aware-ai | null | aware-ai/bart-squadv2 | 325 | null | transformers | 2,824 | ---
datasets:
- squad_v2
---
# BART-LARGE finetuned on SQuADv2
This is the bart-large model finetuned on the SQuADv2 dataset for the question-answering task.
## Model details
BART was proposed in the [paper](https://arxiv.org/abs/1910.13461) **BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension**.
BART is a seq2seq model intended for both NLG and NLU tasks.
To use BART for question answering tasks, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word. This representation is used to classify the token. As reported in the paper, bart-large achieves results comparable to RoBERTa on SQuAD.
Another notable thing about BART is that it can handle sequences with up to 1,024 tokens.
| Param | #Value |
|---------------------|--------|
| encoder layers | 12 |
| decoder layers | 12 |
| hidden size | 4096 |
| num attention heads | 16 |
| on disk size | 1.63GB |
## Model training
This model was trained with following parameters using simpletransformers wrapper:
```
train_args = {
'learning_rate': 1e-5,
'max_seq_length': 512,
'doc_stride': 512,
'overwrite_output_dir': True,
'reprocess_input_data': False,
'train_batch_size': 8,
'num_train_epochs': 2,
'gradient_accumulation_steps': 2,
'no_cache': True,
'use_cached_eval_features': False,
'save_model_every_epoch': False,
'output_dir': "bart-squadv2",
'eval_batch_size': 32,
'fp16_opt_level': 'O2',
}
```
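As a sketch of how these arguments could be plugged into the simpletransformers wrapper (assuming `"bart"` is accepted as a question-answering model type, and with a placeholder path to the SQuADv2 training file):
```python
# Illustrative continuation of the snippet above: reuses the `train_args` dict.
from simpletransformers.question_answering import QuestionAnsweringModel

model = QuestionAnsweringModel("bart", "facebook/bart-large", args=train_args, use_cuda=True)
model.train_model("train-v2.0.json")  # placeholder SQuADv2 training file
```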
[You can even train your own model using this colab notebook](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing)
## Results
```{"correct": 6832, "similar": 4409, "incorrect": 632, "eval_loss": -14.950117511952177}```
## Model in Action 🚀
```python3
from transformers import BartTokenizer, BartForQuestionAnswering
import torch
tokenizer = BartTokenizer.from_pretrained('a-ware/bart-squadv2')
model = BartForQuestionAnswering.from_pretrained('a-ware/bart-squadv2')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors='pt')
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']
start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2]
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
answer = tokenizer.convert_tokens_to_ids(answer.split())
answer = tokenizer.decode(answer)
#answer => 'a nice puppet'
```
> Created with ❤️ by A-ware UG [](https://github.com/aware-ai)
|
adviksinghania/DialoGPT-medium-rick | d5ee9c4f127d968224dcaf7a6cfc73a9788c3a71 | 2021-09-06T16:15:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | adviksinghania | null | adviksinghania/DialoGPT-medium-rick | 325 | null | transformers | 2,825 | ---
tags:
- conversational
---
# Rick DialoGPT medium model |
lucas-bo/DialogGPT-small-yoda | 5b4edb4ce419b400e1dac8b874bfbb344e7ebe43 | 2021-09-08T02:07:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lucas-bo | null | lucas-bo/DialogGPT-small-yoda | 325 | null | transformers | 2,826 | ---
tags:
- conversational
---
# Yoda DialoGPT model |
lysandre/tapas-temporary-repo | 226e9627f561d28b3d9bb4f81b63351fa6c05bc9 | 2020-12-17T15:56:06.000Z | [
"pytorch",
"tapas",
"table-question-answering",
"en",
"dataset:sqa",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | lysandre | null | lysandre/tapas-temporary-repo | 325 | null | transformers | 2,827 | ---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- sqa
---
# TAPAS base model fine-tuned on Sequential Question Answering (SQA)
This model has 4 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) versions which can be used are:
- `revision="v3"`, which corresponds to `tapas_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings)
- `revision="V2"`, which corresponds to `tapas_sqa_masklm_base_reset` (no intermediate pre-training, relative position embeddings)
- `revision="v1"`, which corresponds to `tapas_sqa_masklm_base` (no intermediate pre-training, absolute position embeddings)
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly
training this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
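As a quick illustration, a minimal sketch with the `table-question-answering` pipeline could look like the following (the example table and question are made up):
```python
import pandas as pd
from transformers import pipeline

# Load this SQA checkpoint into a table question answering pipeline.
tqa = pipeline("table-question-answering", model="lysandre/tapas-temporary-repo")

# A small example table; all cell values must be strings.
table = pd.DataFrame({
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
})

answer = tqa(table=table, query="How many movies has George Clooney played in?")
print(answer["answer"])
```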
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
```
|
p208p2002/bart-squad-qg-hl | d818d18f14a766e59fa1c8e49b5118b75e175c77 | 2021-05-03T03:17:22.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:squad",
"arxiv:1606.05250",
"arxiv:1705.00106",
"transformers",
"question-generation",
"autotrain_compatible"
] | text2text-generation | false | p208p2002 | null | p208p2002/bart-squad-qg-hl | 325 | 1 | transformers | 2,828 | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---
# Transformer QG on SQuAD
HLQG is proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
**This is a reproduced version.**
More detail: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)
## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?
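A minimal generation sketch for this checkpoint could look like the following (decoding settings are illustrative assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("p208p2002/bart-squad-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("p208p2002/bart-squad-qg-hl")

# The answer span is wrapped in [HL] ... [HL] highlight tokens.
text = "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
inputs = tokenizer(text, return_tensors="pt")

question_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
```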
## Data setting
We report two dataset setting as Follow
### SQuAD
- train: 87599
- validation: 10570
> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877
> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)
## Available models
- BART
- GPT2
- T5
## Experiments
We report scores with the `NQG Scorer`, which is used in SQuAD NQG.
Unless otherwise specified, the model size defaults to "base".
### SQuAD
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64 |
GPT2-HLSQG |49.31 |33.95 |25.41 |19.69 |22.29 |48.82 |
T5-HLSQG |54.29 |39.22 |30.43 |24.26 |25.56 |53.11 |
### SQuAD NQG
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23 |
BART-HLSQG |54.12 |38.19 |28.84 |22.35 |24.55 |51.03 |
GPT2-HLSQG |49.82 |33.69 |24.71 |18.63 |21.90 |47.60 |
T5-HLSQG |53.13 |37.60 |28.62 |22.38 |24.48 |51.20 | |
z-uo/roberta-qasper | bd64e89ed2dfc11993cf33b806287d178d75fd1b | 2022-03-08T18:40:11.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:z-uo/qasper-squad",
"transformers",
"question_answering",
"autotrain_compatible"
] | question-answering | false | z-uo | null | z-uo/roberta-qasper | 325 | 1 | transformers | 2,829 | ---
language: en
tags:
- question_answering
datasets:
- z-uo/qasper-squad
---
# roberta-base for QA with qasper
Trained from deepset/roberta-base-squad2.
How to use in Python code:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model with pipeline
model_name = "z-uo/roberta-qasper"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
# Get predictions
QA_input = {
'question': 'what they propose?',
'context': "In this paper, we provide an innovative contribution in the research domain dedicated to crop mapping by exploiting the of Sentinel-2 satellite images time series, with the specific aim to extract information on 'where and when' crops are grown. The final goal is to set up a workflow able to reliably identify (classify) the different crops that are grown in a given area by exploiting an end-to-end (3+2)D convolutional neural network (CNN) for semantic segmentation. The method also has the ambition to provide information, at pixel level, regarding the period in which a given crop is cultivated during the season. To this end, we propose a solution called Class Activation Interval (CAI) which allows us to interpret, for each pixel, the reasoning made by CNN in the classification determining in which time interval, of the input time series, the class is likely to be present or not. Our experiments, using a public domain dataset, show that the approach is able to accurately detect crop classes with an overall accuracy of about 93% and that the network can detect discriminatory time intervals in which crop is cultivated. These results have twofold importance: (i) demonstrate the ability of the network to correctly interpret the investigated physical process (i.e., bare soil condition, plant growth, senescence and harvesting according to specific cultivated variety) and (ii) provide further information to the end-user (e.g., the presence of crops and its temporal dynamics)."
}
res = nlp(QA_input)
# Load model & tokenizer without pipeline
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
``` |
voidism/diffcse-bert-base-uncased-sts | ea605f716bd57f97407640374487c2c9effc94af | 2022-05-01T19:18:45.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2204.10298",
"arxiv:2104.08821",
"arxiv:2111.00899",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | voidism | null | voidism/diffcse-bert-base-uncased-sts | 325 | 1 | transformers | 2,830 | ---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
[](https://github.com/voidism/DiffCSE/)
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)
Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)
Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.
## Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
## Setups
[](https://www.python.org/downloads/release/python-395/)
### Requirements
* Python 3.9.5
### Install our customized Transformers package
```
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package.
### Install other packages
```
pip install -r requirements.txt
```
### Download the pretraining dataset
```
cd data
bash download_wiki.sh
```
### Download the downstream dataset
```
cd SentEval/data/downstream/
bash download_dataset.sh
```
## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--generator_name distilbert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir <your_output_model_dir> \
--num_train_epochs 2 \
--per_device_train_batch_size 64 \
--learning_rate 7e-6 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--overwrite_output_dir \
--logging_first_step \
--logging_dir <your_logging_dir> \
--temp 0.05 \
--do_train \
--do_eval \
--batchnorm \
--lambda_weight 0.005 \
--fp16 --masking_ratio 0.30
```
Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper.
* `--masking_ratio`: the masking ratio for MLM generator to randomly replace tokens.
* `--generator_name`: the model name of generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`RoBERTa-base`, `RoBERTa-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models.
For the results in our paper, we use a NVidia 2080Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.
## Evaluation
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation:
```bash
python evaluation.py \
--model_name_or_path <your_output_model_dir> \
--pooler cls_before_pooler \
--task_set <sts|transfer|full> \
--mode test
```
To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts:
### BERT
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
### RoBERTa
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).
## Pretrained models
[](https://huggingface.co/voidism)
* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans
We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.
```python
from diffcse import DiffCSE
model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```
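Assuming the `diffcse` wrapper mirrors the SimCSE tool API (an assumption here, not something stated in this card), encoding sentences and computing similarities would look roughly like this:
```python
from diffcse import DiffCSE

model = DiffCSE("voidism/diffcse-bert-base-uncased-sts")

# Embed a single sentence into a fixed-size vector.
embedding = model.encode("A woman is reading a book.")

# Cosine similarity between two sentences.
score = model.similarity("A woman is reading.", "A woman is making a sketch.")
print(score)
```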
## Citations
[](https://doi.org/10.48550/arXiv.2204.10298)
Please cite our paper and the SimCSE paper if they are helpful to your work!
```bibtex
@inproceedings{chuang2022diffcse,
title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
year={2022}
}
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
|
Obscurity/DialoGPT-Medium-707 | 3bfc11d16beab7f6028b87ee45d5d30fac5090e0 | 2021-08-28T18:53:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Obscurity | null | Obscurity/DialoGPT-Medium-707 | 324 | null | transformers | 2,831 | ---
tags:
- conversational
---
# 707 DialoGPT Model |
Rei/DialoGPT-medium-kurisu | 95c48ba105bcdbd36dfa091c6c6b2a5524341730 | 2021-09-20T16:10:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Rei | null | Rei/DialoGPT-medium-kurisu | 324 | null | transformers | 2,832 | ---
tags:
- conversational
---
# Steins Gate DialoGPT Model |
microsoft/trocr-small-handwritten | 9354dd58d4731469414774dd0595f551f68700fa | 2022-07-01T07:38:58.000Z | [
"pytorch",
"vision-encoder-decoder",
"arxiv:2109.10282",
"transformers",
"trocr",
"image-to-text"
] | image-to-text | false | microsoft | null | microsoft/trocr-small-handwritten | 324 | 1 | transformers | 2,833 | ---
tags:
- trocr
- image-to-text
---
# TrOCR (small-sized model, fine-tuned on IAM)
TrOCR model fine-tuned on the [IAM dataset](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database). It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of DeiT, while the text decoder was initialized from the weights of UniLM.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests
# load image from the IAM database
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-small-handwritten')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-small-handwritten')
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### BibTeX entry and citation info
```bibtex
@misc{li2021trocr,
title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
year={2021},
eprint={2109.10282},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
severinsimmler/literary-german-bert | 4d9e53e27ed2952ad17392a62749ebb9f097dc5c | 2021-05-20T05:47:20.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"de",
"transformers",
"autotrain_compatible"
] | token-classification | false | severinsimmler | null | severinsimmler/literary-german-bert | 324 | null | transformers | 2,834 | ---
language: de
thumbnail: https://huggingface.co/severinsimmler/literary-german-bert/raw/main/kfold.png
---
# German BERT for literary texts
This German BERT is based on `bert-base-german-dbmdz-cased`, and has been adapted to the domain of literary texts by fine-tuning the language modeling task on the [Corpus of German-Language Fiction](https://figshare.com/articles/Corpus_of_German-Language_Fiction_txt_/4524680/1). Afterwards the model was fine-tuned for named entity recognition on the [DROC](https://gitlab2.informatik.uni-wuerzburg.de/kallimachos/DROC-Release) corpus, so you can use it to recognize protagonists in German novels.
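For example, a minimal sketch for tagging protagonists with the token-classification pipeline could look like this (the example sentence and expected output are illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entity spans.
ner = pipeline(
    "ner",
    model="severinsimmler/literary-german-bert",
    aggregation_strategy="simple",
)

print(ner("Effi Briest fuhr mit ihrer Mutter nach Berlin."))
# Illustrative output: a PER span covering "Effi Briest" (labels: B-PER/I-PER/O).
```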
# Stats
## Language modeling
The [Corpus of German-Language Fiction](https://figshare.com/articles/Corpus_of_German-Language_Fiction_txt_/4524680/1) consists of 3,194 documents with 203,516,988 tokens or 1,520,855 types. The publication year of the texts ranges from the 18th to the 20th century:

### Results
After one epoch:
| Model | Perplexity |
| ---------------- | ---------- |
| Vanilla BERT | 6.82 |
| Fine-tuned BERT | 4.98 |
## Named entity recognition
The provided model was also fine-tuned for two epochs for named entity recognition on 10,799 training sentences, validated on 547 and tested on 1,845 sentences, with three labels: `B-PER`, `I-PER` and `O`.
## Results
| Dataset | Precision | Recall | F1 |
| ------- | --------- | ------ | ---- |
| Dev | 96.4 | 87.3 | 91.6 |
| Test | 92.8 | 94.9 | 93.8 |
The model has also been evaluated using 10-fold cross validation and compared with a classic Conditional Random Field baseline described in [Jannidis et al.](https://opus.bibliothek.uni-wuerzburg.de/opus4-wuerzburg/frontdoor/deliver/index/docId/14333/file/Jannidis_Figurenerkennung_Roman.pdf) (2015):

# References
Markus Krug, Lukas Weimer, Isabella Reger, Luisa Macharowsky, Stephan Feldhaus, Frank Puppe, Fotis Jannidis, [Description of a Corpus of Character References in German Novels](http://webdoc.sub.gwdg.de/pub/mon/dariah-de/dwp-2018-27.pdf), 2018.
Fotis Jannidis, Isabella Reger, Lukas Weimer, Markus Krug, Martin Toepfer, Frank Puppe, [Automatische Erkennung von Figuren in deutschsprachigen Romanen](https://opus.bibliothek.uni-wuerzburg.de/opus4-wuerzburg/frontdoor/deliver/index/docId/14333/file/Jannidis_Figurenerkennung_Roman.pdf), 2015.
|
jackyv/DialoGPT-small-pinocchio | f9c4e9450abc108eb0f450effcb117ce6f0fd7e7 | 2022-03-05T21:03:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jackyv | null | jackyv/DialoGPT-small-pinocchio | 324 | 1 | transformers | 2,835 | ---
tags:
- conversational
---
# Pinocchio DialoGPT Model |
Helsinki-NLP/opus-mt-tc-big-en-it | 8936867c615786321da1dac905a9f171e7dcb69c | 2022-06-01T13:03:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"it",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-it | 324 | 2 | transformers | 2,836 | ---
language:
- en
- it
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-it
results:
- task:
name: Translation eng-ita
type: translation
args: eng-ita
dataset:
name: flores101-devtest
type: flores_101
args: eng ita devtest
metrics:
- name: BLEU
type: bleu
value: 29.6
- task:
name: Translation eng-ita
type: translation
args: eng-ita
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-ita
metrics:
- name: BLEU
type: bleu
value: 53.9
- task:
name: Translation eng-ita
type: translation
args: eng-ita
dataset:
name: newstest2009
type: wmt-2009-news
args: eng-ita
metrics:
- name: BLEU
type: bleu
value: 31.6
---
# opus-mt-tc-big-en-it
Neural machine translation model for translating from English (en) to Italian (it).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): ita
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ita/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT eng-ita README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ita/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"He was always very respectful.",
"This cat is black. Is the dog, too?"
]
model_name = "pytorch-models/opus-mt-tc-big-en-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Era sempre molto rispettoso.
# Questo gatto e' nero, e' anche il cane?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-it")
print(pipe("He was always very respectful."))
# expected output: Era sempre molto rispettoso.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ita/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ita/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-ita | tatoeba-test-v2021-08-07 | 0.72539 | 53.9 | 17320 | 116336 |
| eng-ita | flores101-devtest | 0.59002 | 29.6 | 1012 | 27306 |
| eng-ita | newssyscomb2009 | 0.60759 | 31.2 | 502 | 11551 |
| eng-ita | newstest2009 | 0.60441 | 31.6 | 2525 | 63466 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:27:22 EEST 2022
* port machine: LM0-400-22516.local
|
damianruel/DialoGPT-medium-MySon | c002fb779f092056d61d90565e5cc6feddd42aae | 2022-06-17T01:37:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | damianruel | null | damianruel/DialoGPT-medium-MySon | 324 | null | transformers | 2,837 | ---
tags:
- conversational
---
# me fr |
Devid/DialoGPT-small-Miku | 6f68713dc2b12dbbb841b11143283b23d9ef9329 | 2022-01-23T14:35:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Devid | null | Devid/DialoGPT-small-Miku | 323 | null | transformers | 2,838 | ---
tags:
- conversational
---
# Miku DialogGPT Model |
GunjanPantha/DialoGPT-small-gameofthrones | 6ea9fe613eeaa6593c58ac0aeb1b1702e8d9b84a | 2021-08-27T17:39:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | GunjanPantha | null | GunjanPantha/DialoGPT-small-gameofthrones | 323 | null | transformers | 2,839 | ---
tags:
- conversational
---
# Game of Thrones DialoGPT Model |
NlpHUST/vi-electra-small | 9314e65994a0581b738693693753f358f832f65b | 2021-08-10T03:35:44.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | NlpHUST | null | NlpHUST/vi-electra-small | 323 | null | transformers | 2,840 | Entry not found |
asahi417/tner-xlm-roberta-large-uncased-ontonotes5 | 8b6a4ccc50c21adea5a20ce03248923d25c3a898 | 2021-02-13T00:12:08.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | asahi417 | null | asahi417/tner-xlm-roberta-large-uncased-ontonotes5 | 323 | null | transformers | 2,841 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-ontonotes5")
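# A minimal inference sketch (assumption: the generic transformers NER pipeline
# works with this checkpoint; this part is not from the original card).
from transformers import pipeline
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("google was founded in california in 1998."))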
``` |
eliasedwin7/MalayalamBERT | 3cbd5464c3fb14fed5afa4cf7c6ae38bc3b90719 | 2021-05-20T16:17:31.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | eliasedwin7 | null | eliasedwin7/MalayalamBERT | 323 | 1 | transformers | 2,842 | Entry not found |
Helsinki-NLP/opus-mt-en-ml | d467d0e5de51911e2953bab444f3694ad4bb68cd | 2021-09-09T21:37:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ml",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ml | 322 | 1 | transformers | 2,843 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ml
* source languages: en
* target languages: ml
* OPUS readme: [en-ml](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ml/README.md)
* dataset: opus+bt+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt+bt-2020-04-28.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.zip)
* test set translations: [opus+bt+bt-2020-04-28.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.test.txt)
* test set scores: [opus+bt+bt-2020-04-28.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.eval.txt)
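A short usage sketch with the transformers Marian classes (the example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ml"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["How are you today?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```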
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ml | 19.1 | 0.536 |
|
Keen/DialoGPT-small-potter | 9bfb4f71ee77b5f485713a18112131361dc723f6 | 2021-08-26T16:15:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Keen | null | Keen/DialoGPT-small-potter | 322 | null | transformers | 2,844 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Sedge/DialoGPT-small-Sedge | 6fc72979fb032b670db84d0d4365912a477aa32e | 2021-09-04T14:50:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Sedge | null | Sedge/DialoGPT-small-Sedge | 322 | null | transformers | 2,845 | ---
tags:
- conversational
---
# Sedged DialoGPT Model |
mrm8488/mobilebert-uncased-finetuned-squadv2 | aaecbbe378c88bca89e94123fc122027fa85d9dc | 2020-12-11T21:54:44.000Z | [
"pytorch",
"mobilebert",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:2004.02984",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/mobilebert-uncased-finetuned-squadv2 | 322 | null | transformers | 2,846 | ---
language: en
datasets:
- squad_v2
---
# MobileBERT + SQuAD v2 📱❓
[mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) fine-tuned on [SQUAD v2.0 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
**MobileBERT** is a thin version of *BERT_LARGE*, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
The checkpoint used here is the original MobileBert Optimized Uncased English: (uncased_L-24_H-128_B-512_A-4_F-4_OPT) checkpoint.
More about the model [here](https://arxiv.org/abs/2004.02984)
## Details of the downstream task (Q&A) - Dataset 📚
**SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type bert \
--model_name_or_path 'google/mobilebert-uncased' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v2.0.json' \
--predict_file '/content/dataset/dev-v2.0.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 5 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/output' \
--overwrite_output_dir \
--save_steps 1000 \
--version_2_with_negative
```
It is worth noting that this model converges much faster than other ones, so it is also cheap to fine-tune.
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **75.37** |
| **F1** | **78.48** |
| **Size**| **94 MB** |
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/mobilebert-uncased-finetuned-squadv2')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'Who did identified it ?'
})
# Output: {'answer': 'scientists.', 'end': 106, 'score': 0.41531604528427124, 'start': 96}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
apple/deeplabv3-mobilevit-xx-small | 98bfa3a97c4cbb3aeec2729700e253626ad3f37a | 2022-06-02T10:51:50.000Z | [
"pytorch",
"coreml",
"mobilevit",
"dataset:pascal-voc",
"arxiv:2110.02178",
"arxiv:1706.05587",
"transformers",
"vision",
"image-segmentation",
"license:other"
] | image-segmentation | false | apple | null | apple/deeplabv3-mobilevit-xx-small | 322 | null | transformers | 2,847 | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- pascal-voc
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg
example_title: Cat
---
# MobileViT + DeepLabV3 (extra extra small-sized model)
MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-xx-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-xx-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
## Training procedure
### Preprocessing
At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs.
## Evaluation results
| Model | PASCAL VOC mIOU | # params | URL |
|-------------------|-----------------|-----------|-----------------------------------------------------------|
| **MobileViT-XXS** | **73.6** | **1.9 M** | https://huggingface.co/apple/deeplabv3-mobilevit-xx-small |
| MobileViT-XS | 77.1 | 2.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-x-small |
| MobileViT-S | 79.1 | 6.4 M | https://huggingface.co/apple/deeplabv3-mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
|
casperthegazer/DiabloGPT-medium-lukedot | 5f1b50dffb79601644d51294d4f35081ba2e9434 | 2022-07-08T00:44:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | casperthegazer | null | casperthegazer/DiabloGPT-medium-lukedot | 322 | null | transformers | 2,848 | ---
tags:
- conversational
---
# Luke Dot DiabloGPT Model |
Navya2608/DialoGPT-small-tonystarkscript | 3081357fa1f84e398e3f109331be4773291f775a | 2021-11-03T08:57:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Navya2608 | null | Navya2608/DialoGPT-small-tonystarkscript | 321 | null | transformers | 2,849 | ---
tags:
- conversational
---
# Tony Stark dialoGPT model |
transformersbook/distilbert-base-uncased-distilled-clinc | 0bebc777ca5e752414f583b61261b102ca45c606 | 2022-02-05T16:47:39.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | transformersbook | null | transformersbook/distilbert-base-uncased-distilled-clinc | 321 | 1 | transformers | 2,850 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9393548387096774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned with knowledge distillation version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1005
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
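As a generic illustration, the checkpoint can be loaded in a text-classification pipeline; the example query and the intent shown in the comment are assumptions, not outputs verified for this card:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="transformersbook/distilbert-base-uncased-distilled-clinc",
)

print(classifier("Hey, I'd like to rent a vehicle from Nov 1st to Nov 15th in Paris."))
# Illustrative output: a CLINC intent label such as "car_rental" with a confidence score.
```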
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9031 | 1.0 | 318 | 0.5745 | 0.7365 |
| 0.4481 | 2.0 | 636 | 0.2856 | 0.8748 |
| 0.2528 | 3.0 | 954 | 0.1798 | 0.9187 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9294 |
| 0.1416 | 5.0 | 1590 | 0.1211 | 0.9348 |
| 0.1243 | 6.0 | 1908 | 0.1116 | 0.9348 |
| 0.1133 | 7.0 | 2226 | 0.1062 | 0.9377 |
| 0.1075 | 8.0 | 2544 | 0.1035 | 0.9387 |
| 0.1039 | 9.0 | 2862 | 0.1014 | 0.9381 |
| 0.1018 | 10.0 | 3180 | 0.1005 | 0.9394 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
yseop/distilbert-base-financial-relation-extraction | e8ad8cf4e022cdf453336f2604c7c69edd9881d4 | 2021-10-25T07:33:13.000Z | [
"pytorch",
"feature-extraction",
"text-classification"
] | text-classification | false | yseop | null | yseop/distilbert-base-financial-relation-extraction | 321 | 1 | null | 2,851 | ---
inference: true
pipeline_tag: text-classification
tags:
- feature-extraction
- text-classification
library: pytorch
---
<div style="clear: both;">
<div style="float: left; margin-right 1em;">
<h1><strong>FReE (Financial Relation Extraction)</strong></h1>
</div>
<div>
<h2><img src="https://pbs.twimg.com/profile_images/1333760924914753538/fQL4zLUw_400x400.png" alt="" width="25" height="25"></h2>
</div>
</div>
We present FReE, a [DistilBERT](https://huggingface.co/distilbert-base-uncased) base model fine-tuned on a custom financial dataset for financial relation type detection and classification.
## Process
The model detects the presence of a relationship between financial terms and qualifies the relationship when one is present. Example use cases:
* An A-B trust is a joint trust created by a married couple for the purpose of minimizing estate taxes. (<em>Relationship **exists**, type: **is**</em>)
* There are no withdrawal penalties. (<em>Relationship **does not exist**, type: **x**</em>)
## Data
The data consists of financial definitions collected from different sources (Wikimedia, IFRS, Investopedia) for financial indicators. Each definition has been split up into sentences, and term relationships in a sentence have been extracted using the [Stanford Open Information Extraction](https://nlp.stanford.edu/software/openie.html) module.
A typical row in the dataset consists of a definition sentence and its corresponding relationship label.
The labels were restricted to the 5 most-widely identified relationships, namely: **x** (no relationship), **has**, **is in**, **is** and **are**.
## Model
The model used is a standard DistilBERT-base transformer model from the Hugging Face library. See [HUGGING FACE DistilBERT base model](https://huggingface.co/distilbert-base-uncased) for more details about the model.
In addition, the model has been pretrained to initialize weights that would otherwise be unused if loaded from an existing pretrained stock model.
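A minimal usage sketch with the text-classification pipeline (the example sentence is illustrative, and the returned label names depend on the label mapping stored in the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yseop/distilbert-base-financial-relation-extraction",
)

print(classifier("An A-B trust is a joint trust created by a married couple."))
# Expected: one of the relation classes described below (x, has, is in, is, are) with a score.
```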
## Metrics
The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.
| relation | precision | recall | f1-score | support |
| ------------- |:-------------:|:-------------:|:-------------:| -----:|
| has | 0.7416 | 0.9674 | 0.8396 | 2362 |
| is in | 0.7813 | 0.7925 | 0.7869 | 2362 |
| is | 0.8650 | 0.6863 | 0.7653 | 2362 |
| are | 0.8365 | 0.8493 | 0.8429 | 2362 |
| x | 0.9515 | 0.8302 | 0.8867 | 2362 |
| | | | | |
| macro avg | 0.8352 | 0.8251 | 0.8243 | 11810 |
| weighted avg | 0.8352 | 0.8251 | 0.8243 | 11810 | |
FourthBrain/bert_model_reddit_tsla | 75c60e462ca3d06e1182985fac4a457f7bf13636 | 2022-06-29T19:00:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | FourthBrain | null | FourthBrain/bert_model_reddit_tsla | 321 | null | transformers | 2,852 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert_model_reddit_tsla
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_model_reddit_tsla
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4678
- Accuracy: 0.7914
- F1: 0.7832
## Model description
More information needed
## Intended uses & limitations
More information needed
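Since the card does not document the label mapping, any usage sketch is necessarily generic; predictions may appear as generic `LABEL_0`/`LABEL_1` ids unless the model config maps them to names:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="FourthBrain/bert_model_reddit_tsla",
)

# Illustrative input; the label semantics are not documented in this card.
print(classifier("TSLA deliveries beat expectations this quarter."))
```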
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bloom-testing/test-bloomd-350m-fix-branch-name | b1d22ceabbc7e12803fd4592b35810e2150125a6 | 2022-07-22T19:31:20.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"transformers"
] | feature-extraction | false | bloom-testing | null | bloom-testing/test-bloomd-350m-fix-branch-name | 321 | null | transformers | 2,853 | Entry not found |
aced/DialoGPT-medium-3PO | cd5d0b9000acfb097d4d1c13b6382bb2ec4d4aeb | 2021-08-29T02:03:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aced | null | aced/DialoGPT-medium-3PO | 320 | null | transformers | 2,854 | ---
tags:
- conversational
---
# 3PO |
danyaljj/gpt2_question_generation_given_paragraph | 8734b60093168c44eae0ebb696c6229eaf939717 | 2021-06-17T18:23:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt2_question_generation_given_paragraph | 320 | null | transformers | 2,855 | Sample usage:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("danyaljj/gpt2_question_generation_given_paragraph")
input_ids = tokenizer.encode("There are two apples on the counter. Q:", return_tensors="pt")
outputs = model.generate(input_ids)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Which should produce this:
```
Generated: There are two apples on the counter. Q: What is the name of the counter that is on
``` |
huggingtweets/uckerssket | 970b32e69c38c6cfa8023b370ddf8603d7c525fe | 2021-05-23T03:14:16.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/uckerssket | 320 | null | transformers | 2,856 | ---
language: en
thumbnail: https://www.huggingtweets.com/uckerssket/1617904760482/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364730779008339968/P3zu8afC_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">katherin 🤖 AI Bot </div>
<div style="font-size: 15px">@uckerssket bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@uckerssket's tweets](https://twitter.com/uckerssket).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3231 |
| Retweets | 902 |
| Short tweets | 456 |
| Tweets kept | 1873 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ytopbvz5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @uckerssket's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3707pimx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3707pimx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/uckerssket')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
HYPJUDY/layoutlmv3-large-finetuned-funsd | 077a24a6d5545da0d2bf3435f05288dc4e4b77bc | 2022-07-19T02:24:45.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"arxiv:2204.08387",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | HYPJUDY | null | HYPJUDY/layoutlmv3-large-finetuned-funsd | 320 | 2 | transformers | 2,857 | ---
license: mit
---
# layoutlmv3-large-finetuned-funsd
The model [layoutlmv3-large-finetuned-funsd](https://huggingface.co/HYPJUDY/layoutlmv3-large-finetuned-funsd) is fine-tuned on the FUNSD dataset initialized from [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large).
This finetuned model achieves an F1 score of 92.15 on the test split of the FUNSD dataset.
[Paper](https://arxiv.org/pdf/2204.08387.pdf) | [Code](https://aka.ms/layoutlmv3) | [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)
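A rough inference sketch is shown below. The processor repo, the `apply_ocr` setting (which requires pytesseract) and the local `image.png` path are all assumptions for illustration; they are not specified by the original card:
```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# Assumed: load the processor from the base LayoutLMv3 repo and let it run OCR.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-large", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("HYPJUDY/layoutlmv3-large-finetuned-funsd")

image = Image.open("image.png").convert("RGB")
encoding = processor(image, return_tensors="pt", truncation=True)
outputs = model(**encoding)

# One predicted label id per token; map ids to FUNSD tags via the model config.
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```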
If you find LayoutLMv3 helpful, please cite the following paper:
```
@article{huang2022layoutlmv3,
title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking},
author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei},
journal={arXiv preprint arXiv:2204.08387},
year={2022}
}
```
## License
The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project.
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
|
OFA-Sys/OFA-tiny | 90e02933ae29b4daffd70199cf630760e9eb764b | 2022-07-25T11:52:22.000Z | [
"pytorch",
"ofa",
"transformers",
"license:apache-2.0"
] | null | false | OFA-Sys | null | OFA-Sys/OFA-tiny | 320 | 1 | transformers | 2,858 | ---
license: apache-2.0
---
# OFA-tiny
This is the **tiny** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` which contains the model weights. There is no need to worry about a mismatch between Fairseq and transformers, since we have already addressed this issue.
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install the transformers and download the models as shown below.
```
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-tiny
```
Afterwards, point `ckpt_dir` to the path of the downloaded OFA-tiny checkpoint, and prepare an image for the test example below. Also, ensure that you have Pillow and torchvision in your environment.
```
>>> import torch
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 256
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
>>> # using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
>>> # using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
ganeshkharad/gk-hinglish-sentiment | 91bc6254790d67d83b498918799cf604178476e6 | 2021-05-19T17:01:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"hi-en",
"dataset:sail",
"transformers",
"sentiment",
"multilingual",
"hindi codemix",
"hinglish",
"license:apache-2.0"
] | text-classification | false | ganeshkharad | null | ganeshkharad/gk-hinglish-sentiment | 319 | 2 | transformers | 2,859 | ---
language:
- hi-en
tags:
- sentiment
- multilingual
- hindi codemix
- hinglish
license: apache-2.0
datasets:
- sail
---
# Sentiment Classification for hinglish text: `gk-hinglish-sentiment`
## Model description
Trained on a small amount of review data.
## Intended uses & limitations
I wanted something that works well with Hinglish data, as it is widely used in India.
The training data was smaller than expected.
#### How to use
```python
# sample code
from transformers import BertTokenizer, BertForSequenceClassification

# load the fine-tuned checkpoint directly from the Hugging Face Hub
tokenizerg = BertTokenizer.from_pretrained("ganeshkharad/gk-hinglish-sentiment")
modelg = BertForSequenceClassification.from_pretrained("ganeshkharad/gk-hinglish-sentiment")

text = "kuch bhi type karo hinglish mai"
encoded_input = tokenizerg(text, return_tensors='pt')
output = modelg(**encoded_input)
print(output)
# output contains 3 labels: LABEL_0 = Negative, LABEL_1 = Neutral, LABEL_2 = Positive
```
#### Limitations and bias
The data contains only Hinglish code-mixed text and was very limited. I may update this model if I can get a good amount of data.
## Training data
The training data contains labelled examples for 3 labels.
See the pre-trained model card below for a description of the pre-training data.
I have fine-tuned the following model:
https://huggingface.co/rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment
### BibTeX entry and citation info
```@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
huggingface-course/codeparrot-ds | c2d102b31841070de1e8928282e845caa63d124e | 2021-11-11T17:32:45.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | huggingface-course | null | huggingface-course/codeparrot-ds | 319 | null | transformers | 2,860 | Entry not found |
rbawden/modern_french_normalisation | 713121a5758884d53f46ceb01c2ec2c2e00177b2 | 2022-07-22T17:41:33.000Z | [
"pytorch",
"fsmt",
"fr",
"transformers",
"license:cc-by-4.0"
] | null | false | rbawden | null | rbawden/modern_french_normalisation | 319 | null | transformers | 2,861 | ---
language: fr
license: cc-by-4.0
---
# Modern French normalisation model
Normalisation model from Modern (17th c.) French to contemporary French. It was introduced in [this paper](https://hal.inria.fr/hal-03540226/) (see citation below). The main research repository can be found [here](https://github.com/rbawden/ModFr-Norm). If you use this model, please cite our research paper (see [below](#cite)).
## Model description
The normalisation model is trained on the [FreEM_norm corpus](https://freem-corpora.github.io/corpora/norm/), which is a parallel corpus of French texts from the 17th century and their manually normalised versions that follow contemporary French spelling. The model is a transformer with 2 encoder layers, 4 decoder layers, embedding dimensions of size 256, and a feedforward dimension of 1024. The associated tokeniser is trained with SentencePiece and the BPE strategy with a BPE vocabulary of 1000 tokens.
### Intended uses & limitations
The model is designed to be used to normalise 17th c. French texts. The best performance can be seen on texts from similar genres as those produced within this century of French.
### How to use
The model is to be used with the custom pipeline available in the original repository [here](https://github.com/rbawden/ModFr-Norm/blob/main/hf-conversion/pipeline.py) and in this repository [here](https://huggingface.co/rbawden/modern_french_normalisation/blob/main/pipeline.py). You first need to download the pipeline file so that you can use it locally (since it is not integrated into HuggingFace).
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from pipeline import NormalisationPipeline # N.B. local file
cache_lexicon_path="~/.normalisation_lex.pickle" # optionally set a path to store the processed lexicon (speeds up loading)
tokeniser = AutoTokenizer.from_pretrained("rbawden/modern_french_normalisation")
model = AutoModelForSeq2SeqLM.from_pretrained("rbawden/modern_french_normalisation")
norm_pipeline = NormalisationPipeline(model=model, tokenizer=tokeniser, batch_size=32, beam_size=5, cache_file=cache_lexicon_path)
list_inputs = ["Elle haïſſoit particulierement le Cardinal de Lorraine;", "Adieu, i'iray chez vous tantoſt vous rendre grace."]
list_outputs = norm_pipeline(list_inputs)
print(list_outputs)
>> [{'text': 'Elle haïssait particulièrement le Cardinal de Lorraine; ', 'alignment': [([0, 3], [0, 3]), ([5, 12], [5, 12]), ([14, 29], [14, 29]), ([31, 32], [31, 32]), ([34, 41], [34, 41]), ([43, 44], [43, 44]), ([46, 53], [46, 53]), ([54, 54], [54, 54])]}, {'text': "Adieu, j'irai chez vous tantôt vous rendre grâce. ", 'alignment': [([0, 4], [0, 4]), ([5, 5], [5, 5]), ([7, 8], [7, 8]), ([9, 12], [9, 12]), ([14, 17], [14, 17]), ([19, 22], [19, 22]), ([24, 30], [24, 29]), ([32, 35], [31, 34]), ([37, 42], [36, 41]), ([44, 48], [43, 47]), ([49, 49], [48, 48])]}]
```
### Limitations and bias
The model has been learnt in a supervised fashion and therefore like any such model is likely to perform well on texts similar to those used for training and less well on other texts. Whilst care was taken to include a range of different domains from different periods in the 17th c. in the training data, there are nevertheless imbalances, notably with some decades (e.g. 1610s) being underrepresented.
The model reaches a high performance, but could in rare cases result in changes to the text other than those involving spelling conventions (e.g. changing words, deleting or hallucinating words). A post-processing step is introduced in the pipeline file to avoid these problems, which involves a look-up in a contemporary French lexicon ([The Le*fff*](http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html)) and checks to make sure that the normalised words do not stray too far from the original source words.
## Training data
The model is trained on the parallel FreEM dataset [FreEM_norm corpus](https://freem-corpora.github.io/corpora/norm/), consisting of 17,930 training sentences and 2,443 development sentences (used for model selection).
## Training procedure
### Preprocessing
Texts are normalised (in terms of apostrophes, quotes and spaces), before being tokenised with SentencePiece and a vocabulary size of 1000. The inputs are of the form:
```
Sentence in Early Modern French </s>
```
where `</s>` is the end-of-sentence (eos) token.
### Training
The model was trained using [Fairseq](https://github.com/facebookresearch/fairseq) and ported to HuggingFace using an adapted version of [Stas's scripts for FSMT models](https://huggingface.co/blog/porting-fsmt).
### Evaluation results
Coming soon... (once post-processing extension has been finalised)
## BibTex entry and citation info
<a name="cite"></a>
Rachel Bawden, Jonathan Poinhos, Eleni Kogkitsidou, Philippe Gambette, Benoît Sagot and Simon Gabay. 2022. [Automatic Normalisation of Early Modern French](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.358.pdf). In Proceedings of the 13th Language Resources and Evaluation Conference. European Language Resources Association. Marseille, France.
Bibtex:
```
@inproceedings{bawden-etal-2022-automatic,
title = {{Automatic Normalisation of Early Modern French}},
author = {Bawden, Rachel and Poinhos, Jonathan and Kogkitsidou, Eleni and Gambette, Philippe and Sagot, Beno{\^i}t and Gabay, Simon},
url = {https://hal.inria.fr/hal-03540226},
booktitle = {Proceedings of the 13th Language Resources and Evaluation Conference},
publisher = {European Language Resources Association},
year = {2022},
address = {Marseille, France},
pages = {3354--3366},
url = {http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.358.pdf}
}
```
And to reference the FreEM-norm dataset used in the experiments:
Simon Gabay. (2022). FreEM-corpora/FreEMnorm: FreEM norm Parallel corpus (1.0.0). Zenodo. https://doi.org/10.5281/zenodo.5865428
```
@software{simon_gabay_2022_5865428,
author = {Simon Gabay},
title = {{FreEM-corpora/FreEMnorm: FreEM norm Parallel
corpus}},
month = jan,
year = 2022,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.5865428},
url = {https://doi.org/10.5281/zenodo.5865428}
}
```
 |
crodri/roberta-base-ca-v2-qa-catalanqa | ecb8038762486ab0039d1a1224ad52e87b5720e9 | 2022-06-14T06:11:33.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"license:cc0-1.0",
"autotrain_compatible"
] | question-answering | false | crodri | null | crodri/roberta-base-ca-v2-qa-catalanqa | 319 | null | transformers | 2,862 | ---
license: cc0-1.0
---
The roberta-base-ca-cased-qa is a Question Answering (QA) model for the Catalan language fine-tuned from the BERTa model, a RoBERTa base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
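A quick way to try the model is the `transformers` question-answering pipeline. The snippet below is a minimal sketch, with a made-up Catalan question/context pair for illustration:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="crodri/roberta-base-ca-v2-qa-catalanqa")

context = "El Futbol Club Barcelona es va fundar l'any 1899 per un grup de joves aficionats al futbol."
question = "Quan es va fundar el Futbol Club Barcelona?"
print(qa(question=question, context=context))
# returns a dict with 'answer', 'score', 'start' and 'end'
```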
## Datasets
We used the Catalan QA datasets called ViquiQuAD, VilaQuad and XQuad\_ca with test, training and evaluation (90-10-10) splits, balanced by type of questions.
- Test: 2255
- Evaluation: 2276
- Train: 18082
 |
Erikaka/DialoGPT-small-loki | 7e6f8ce678ddb35c5f5124c65742f6e15cf9b04b | 2021-09-10T13:32:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Erikaka | null | Erikaka/DialoGPT-small-loki | 318 | null | transformers | 2,863 | ---
tags:
- conversational
---
# Loki DialoGPT Model |
Helsinki-NLP/opus-mt-fr-ru | 523f08bfff357bd88e348d2acafe1743d3540829 | 2021-09-09T21:56:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ru | 318 | null | transformers | 2,864 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ru
* source languages: fr
* target languages: ru
* OPUS readme: [fr-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ru | 37.9 | 0.585 |
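For reference, a minimal usage sketch with the MarianMT classes from `transformers` (the French example sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Le chat dort sur le canapé."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```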
|
SaffronIce/DialoGPT-medium-Jett | 3beff10cb5ef69497a82b12a43cfc86e160cf55a | 2021-08-26T22:49:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SaffronIce | null | SaffronIce/DialoGPT-medium-Jett | 318 | null | transformers | 2,865 | ---
tags:
- conversational
---
# Jett DialoGPT Model |
Sin/DialoGPT-small-zai | 18d31352c1ec83c5c09ad1231edc97acc704c946 | 2021-10-21T23:21:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Sin | null | Sin/DialoGPT-small-zai | 318 | null | transformers | 2,866 | conver = pipeline("conversational")
---
tags:
- conversational
---
# Harry potter DialoGPT model |
cross-encoder/nli-deberta-v3-large | 52fab31a566138fbd1f6833a4efc1199f875f05e | 2021-12-28T19:10:37.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"transformers",
"microsoft/deberta-v3-large",
"license:apache-2.0",
"zero-shot-classification"
] | zero-shot-classification | false | cross-encoder | null | cross-encoder/nli-deberta-v3-large | 318 | 3 | transformers | 2,867 | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-large
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-large')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
``` |
rrtong/DialoGPT-medium-shang-chi | 6372e6c436939d3830a01dc2ca2c5a11e12b3549 | 2021-11-14T23:21:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rrtong | null | rrtong/DialoGPT-medium-shang-chi | 318 | null | transformers | 2,868 | ---
tags:
- conversational
---
# Shang-Chi DialoGPT Model |
textattack/distilbert-base-cased-MRPC | 1dc1638baffb2f564b6b196e5da1d9c348d80e2a | 2020-06-09T16:46:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-cased-MRPC | 318 | null | transformers | 2,869 | Entry not found |
keans/DialoGPT-small-highjacker | 2f4ea7eb57f4692d8c8af42ee300b964fa34cc88 | 2022-05-29T15:59:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | keans | null | keans/DialoGPT-small-highjacker | 318 | null | transformers | 2,870 | ---
tags:
- conversational
---
# HighJacker DialoGPT Model |
nickmuchi/yolos-small-finetuned-masks | e49a5f1375e8f5d49078d4e8e265930e961b3942 | 2022-06-18T12:54:06.000Z | [
"pytorch",
"yolos",
"object-detection",
"dataset:coco",
"dataset:face-mask-detection",
"arxiv:2106.00666",
"transformers",
"face-mask-detection",
"license:apache-2.0",
"model-index"
] | object-detection | false | nickmuchi | null | nickmuchi/yolos-small-finetuned-masks | 318 | null | transformers | 2,871 | ---
license: apache-2.0
tags:
- object-detection
- face-mask-detection
datasets:
- coco
- face-mask-detection
widget:
- src: https://drive.google.com/uc?id=1VwYLbGak5c-2P5qdvfWVOeg7DTDYPbro
example_title: "City Folk"
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: "Football Match"
metrics:
- average precision
- recall
- IOU
model-index:
- name: yolos-small-finetuned-masks
results: []
---
# YOLOS (small-sized) model
The original YOLOS model was fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
This model was further fine-tuned on the [face mask dataset](https://www.kaggle.com/datasets/andrewmvd/face-mask-detection) from Kaggle. The dataset consists of 853 images of people with annotations categorised as "with mask", "without mask" and "mask not worn correctly". The model was trained for 200 epochs on a single GPU using Google Colab.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'https://drive.google.com/uc?id=1VwYLbGak5c-2P5qdvfWVOeg7DTDYPbro'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('nickmuchi/yolos-small-finetuned-masks')
model = YolosForObjectDetection.from_pretrained('nickmuchi/yolos-small-finetuned-masks')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding face mask detection classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
This model was fine-tuned for 200 epochs on the [face-mask-dataset](https://www.kaggle.com/datasets/andrewmvd/face-mask-detection).
## Evaluation results
This model achieves an AP (average precision) of **53.2**.
The table below shows the full COCO-style bounding-box (IoU metric: bbox) evaluation results.
Metrics | Metric Parameter | Location | Dets | Value |
---------------- | --------------------- | ------------| ------------- | ----- |
Average Precision | (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] | 0.273 |
Average Precision | (AP) @[ IoU=0.50 | area= all | maxDets=100 ] | 0.532 |
Average Precision | (AP) @[ IoU=0.75 | area= all | maxDets=100 ] | 0.257 |
Average Precision | (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.220 |
Average Precision | (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.341 |
Average Precision | (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.545 |
Average Recall | (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] | 0.154 |
Average Recall | (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] | 0.361 |
Average Recall | (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] | 0.415 |
Average Recall | (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] | 0.349 |
Average Recall | (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] | 0.469 |
Average Recall | (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] | 0.584 |
|
2early4coffee/DialoGPT-medium-deadpool | 5051f8da40e7f85fe09a591c233deaf913f1c8e3 | 2021-10-30T20:46:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | 2early4coffee | null | 2early4coffee/DialoGPT-medium-deadpool | 317 | null | transformers | 2,872 | ---
tags:
- conversational
---
# Deadpool DialoGPT Model |
IlyaGusev/rut5_base_headline_gen_telegram | 7ee21f469a71b19a107eef60766bba8ad29bdbb7 | 2021-12-18T19:27:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | IlyaGusev | null | IlyaGusev/rut5_base_headline_gen_telegram | 317 | 1 | transformers | 2,873 | ---
language:
- ru
tags:
- summarization
license: apache-2.0
widget:
- text: "Комиссия Совета Федерации по информационной политике и взаимодействию со СМИ совместно с заинтересованными ведомствами думает над разработкой национального законодательства в области налогообложения глобальных интернет-компаний, таких как Google и Facebook. Об этом сообщил ТАСС председатель комиссии Алексей Пушков. «В настоящее время по линии ОЭСР [Организация экономического сотрудничества и развития] ведется разработка международной конвенции, однако работа над ней еще не завершена. В этих условиях мы исходим из того, что самая разумная позиция - начать разработку национального законодательства, не дожидаясь конвенции», — пояснил сенатор. Пушков отметил, что по такому пути пошли еще несколько стран, в числе которых Франция, Австралия и Турция. По его словам, в России важно задействовать в этой работе Минфин, ФНС, МИД РФ и Роскомнадзор. «Интернет-платформы не фигурируют у нас сейчас как отдельный объект налогообложения. Когда они откроют в России свои представительства в рамках закона о «приземлении», возникнет вопрос: как их официальное присутствие на территории России, которого сейчас нет, будет соотноситься с нашим налоговым режимом. Мы сейчас продумываем, как установить эту взаимосвязь», — сказал Пушков, добавляя, что вопрос внесения изменений в российское законодательство в части налогообложения крупных IT-компаний находится «на первой стадии изучения». Сам сенатор выступает за введение прогрессивной ставки налога в зависимости от прибыли IT-компаний на территории страны. При этом, подчеркнул он, одна из задач национальной системы налогообложения будет заключаться в подсчете налогооблагаемой базы. Сейчас крупные ИТ-компании самостоятельно отчитываются о своей прибыли. Однако России нужна собственная система подсчета их доходов, которая позволит определить их «реальную налогооблагаемую базу», считает Пушков. (https://www.gazeta.ru/tech/news/2021/12/17/n_17024239.shtml)"
example_title: "Новость про налоги в IT"
- text: "Первую многоножку, у которой более тысячи ног, обнаружили в австралийских пещерах биологи, изучавшие там подземные воды. Предыдущей рекордсменкой по количеству ног была 700-ногая многоножка. Новый вид имеет длинное тонкое тело, похожее на нить, и большое количество конечностей, по-видимому, дает преимущества для быстрого перемещения и проникновения в труднодоступные места — ученые полагают, такая многоножка может спокойно перемещаться по трещинам в камнях. Австралия известна своими огромными и жутковатыми животными вроде 25-сантиметровых пауков. Теперь список пугающих членистоногих пополнился самой «многоногой» в мире многоножкой, у которой более тысячи ног. Необычное животное обнаружила группа исследователей из Австралии и США в пещерах на западе страны. Подробнее многоножку ученые описали в статье в журнале Scientific Reports. Исследователи занимались оценкой воздействия подземных вод на окружающую среду в зоне добычи полезных ископаемых на западе страны, когда наткнулись на новый вид многоножек. В отличие от большинства сородичей, живущих на поверхности, эти многоножки обитали в пещерах на глубине до 60 метров. Новый вид исследователи назвали Eumillipes persephone, в честь Персефоны — древнегреческой богини подземного мира. У многоножки оказалось 1306 ног — больше, чем у любого другого известного вида. Предыдущей рекордсменкой была калифорнийская Illacme plenipes, у которой насчитывалось до 750 ног. «Эти животные были настолько уникальны, — говорит биолог Бруно Бузатто. — Как только я понял, какой длины они были... Стало ясно, что это что-то совершенно новое». У Е. persephone нитевидное тело длиной около 9,5 см и шириной всего миллиметр, состоящее из 330 сегментов, короткие ноги и конусообразная голова. Как и другие животные, живущие в постоянной темноте, эти многоножки бледны и слепы. Энтомолог Пол Марек сравнивает ее с белой нитью, выдернутой из рубашки. Чтобы посчитать количество ног, ученым пришлось сначала снять многоножку в высоком разрешении, а затем закрашивать на фото каждый десяток ног другим цветом. (https://www.gazeta.ru/science/2021/12/17_a_14325355.shtml)"
example_title: "Новость про многоножку"
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."
example_title: "Википедия"
---
# RuT5TelegramHeadlines
## Model description
Based on [rut5-base](https://huggingface.co/cointegrated/rut5-base) model
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "IlyaGusev/rut5_base_headline_gen_telegram"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
## Training data
- Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py) |
Royce23/DialoGPT-small-almas | a326ba36df63aba525834eda4d2c089daf426884 | 2021-08-16T05:21:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Royce23 | null | Royce23/DialoGPT-small-almas | 317 | null | transformers | 2,874 | ---
tags:
- conversational
---
# Almas DialoGPT Model |
cointegrated/rut5-small | 891938b86cc5cfe1862c15241962e2adea86f0c1 | 2021-06-23T15:06:46.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"ru",
"transformers",
"paraphrasing",
"russian",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | cointegrated | null | cointegrated/rut5-small | 317 | 1 | transformers | 2,875 | ---
language: "ru"
tags:
- paraphrasing
- russian
license: mit
---
This is a small Russian paraphraser based on the [google/mt5-small](https://huggingface.co/google/mt5-small) model.
It has rather poor paraphrasing performance, but can be fine tuned for this or other tasks.
This model was created by taking the [alenusch/mt5small-ruparaphraser](https://huggingface.co/alenusch/mt5small-ruparaphraser) model and stripping 96% of its vocabulary which is unrelated to the Russian language or infrequent.
* The original model has 300M parameters, with 256M of them being input and output embeddings.
* After shrinking the `sentencepiece` vocabulary from 250K to 20K the number of model parameters reduced to 65M parameters, and model size reduced from 1.1GB to 246MB.
* The first 5K tokens in the new vocabulary are taken from the original `mt5-small`.
* The next 15K tokens are the most frequent tokens obtained by tokenizing a Russian web corpus from the [Leipzig corpora collection](https://wortschatz.uni-leipzig.de/en/download/Russian).
The model can be used as follows:
```
# !pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small")
text = 'Ехал Грека через реку, видит Грека в реке рак. '
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(
**inputs,
do_sample=True, top_p=0.95, num_return_sequences=10,
repetition_penalty=2.5,
max_length=32,
)
for h in hypotheses:
print(tokenizer.decode(h, skip_special_tokens=True))
```
|
facebook/data2vec-audio-base | 1f5a6404ec3d696d09d505c66b828124b70d50c8 | 2022-04-19T17:24:37.000Z | [
"pytorch",
"data2vec-audio",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | facebook | null | facebook/data2vec-audio-base | 317 | 2 | transformers | 2,876 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Base
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
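Since the checkpoint ships without a tokenizer, out of the box it can only be used to extract speech representations. Below is a minimal sketch; it assumes 16 kHz input and instantiates a standard Wav2Vec2-style feature extractor by hand rather than relying on a bundled preprocessor config.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Data2VecAudioModel

feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
model = Data2VecAudioModel.from_pretrained("facebook/data2vec-audio-base")

waveform = torch.randn(16000).numpy()  # placeholder for 1 second of 16 kHz audio
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```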
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
ydshieh/vit-gpt2-coco-en-ckpts | 70c64cdd3d4004b5f2d6faa9268aaf87726240d2 | 2021-10-24T12:01:42.000Z | [
"pytorch",
"jax",
"tensorboard",
"vision-encoder-decoder",
"generic",
"image-classification"
] | image-classification | false | ydshieh | null | ydshieh/vit-gpt2-coco-en-ckpts | 317 | 3 | generic | 2,877 | ---
tags:
- image-classification
library_name: generic
---
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as img:
pixel_values = feature_extractor(images=img, return_tensors="np").pixel_values
def generate_step(pixel_values):
output_ids = model.generate(pixel_values, max_length=16, num_beams=4).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
preds = generate_step(pixel_values)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
``` |
Doquey/DialoGPT-small-Michaelbot | 0dd4c42749efd3cd8f3a966715e3272f38ff03ce | 2021-09-02T22:44:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Doquey | null | Doquey/DialoGPT-small-Michaelbot | 316 | null | transformers | 2,878 | ---
tags:
- conversational
---
# Michael |
Piumi/DialogGPT-small-harrypotter | 4f2fb6a2e4ca698266b4a98b3e11b1bb1661dc53 | 2021-09-05T08:59:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Piumi | null | Piumi/DialogGPT-small-harrypotter | 316 | null | transformers | 2,879 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
banden/DialoGPT-small-LokiBot | bf5619e24ca2ed9c04fbfcae1483beed9131655a | 2021-09-23T13:33:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | banden | null | banden/DialoGPT-small-LokiBot | 316 | null | transformers | 2,880 | ---
tags:
- conversational
---
# Loki DialoGPT Model |
microsoft/unispeech-sat-large | dde3cc42bd59e14fde1461a460da59d885aec651 | 2021-12-14T19:17:12.000Z | [
"pytorch",
"unispeech-sat",
"pretraining",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.05752",
"transformers",
"speech"
] | null | false | microsoft | null | microsoft/unispeech-sat-large | 316 | null | transformers | 2,881 | ---
language:
- en
datasets:
tags:
- speech
---
# UniSpeech-SAT-Large
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The large model pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Usage
This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be
used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on tasks such as speaker verification, speaker identification, and speaker diarization.
**Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence
of phonemes before fine-tuning.
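As a rough illustration of feature extraction with the pre-trained encoder (not an official example; it assumes 16 kHz audio and a manually instantiated Wav2Vec2-style feature extractor rather than a bundled preprocessor config):
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatModel

feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
model = UniSpeechSatModel.from_pretrained("microsoft/unispeech-sat-large")

speech = torch.randn(16000).numpy()  # placeholder for 1 second of 16 kHz audio
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # frame-level representations
```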
## Speech Recognition
To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).
## Speech Classification
To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
## Speaker Verification
TODO
## Speaker Diarization
TODO
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
Lizardon/Peterbot | 41e3dc6587a102b3268be16134bd464f1276d468 | 2022-03-02T04:56:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Lizardon | null | Lizardon/Peterbot | 316 | null | transformers | 2,882 | ---
tags:
- conversational
---
# Peter from Your Boyfriend Game.
|
Havokx/DialoGPT-small-Rick | 51796731c265f420e8cde66ac08198cb82901091 | 2021-08-31T23:00:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Havokx | null | Havokx/DialoGPT-small-Rick | 315 | null | transformers | 2,883 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model
|
aishanisingh/DiagloGPT-small-michaelscott | fee94bcfc4d50243bdfc8faed1c6f31c3ca06c0c | 2022-02-13T07:28:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aishanisingh | null | aishanisingh/DiagloGPT-small-michaelscott | 315 | null | transformers | 2,884 | ---
tags:
- conversational
---
# Michael Scott DialoGPT Model |
madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1 | 48c6ee3052eab2fb4c7fa7412be10e665d09be50 | 2021-05-19T22:33:15.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad",
"arxiv:2005.07683",
"transformers",
"bert-base",
"license:mit",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1 | 315 | null | transformers | 2,885 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## BERT-base uncased model fine-tuned on SQuAD v1
This model is block sparse: the **linear** layers contain **20.2%** of the original weights.
The model contains **38.1%** of the original weights **overall**.
The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method.
That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.39x** faster than a dense network on evaluation, at the price of some impact on accuracy (see below).
This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).
This model is case-insensitive: it does not make a difference between english and English.
## Pruning details
A side-effect of the block pruning is that some of the attention heads are completely removed: 90 heads were removed out of a total of 144 (62.5%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.

## Density plot
<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1/raw/main/model_card/density.js" id="ddbad516-679a-400d-9e28-0182fd89b188"></script>
## Details
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `347M` (original BERT: `438M`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))|
| ------ | --------- | --------- |
| **EM** | **76.98** | **80.8** |
| **F1** | **85.45** | **88.5** |
## Example Usage
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1",
tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.20-v1"
)
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print(predictions)
``` |
persiannlp/mt5-base-parsinlu-sentiment-analysis | 91d1db80c78f791fa6c24a3775c3847ec7994c2c | 2021-09-23T16:20:02.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"sentiment",
"sentiment-analysis",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-base-parsinlu-sentiment-analysis | 315 | null | transformers | 2,886 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Sentiment Analysis (آنالیز احساسات)
This is a mT5 model for sentiment analysis.
Here is an example of how you can run this model:
```python
import torch
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
import numpy as np
model_name_or_path = "persiannlp/mt5-base-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name_or_path)
model = MT5ForConditionalGeneration.from_pretrained(model_name_or_path)
def model_predict(text_a, text_b):
features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt')
output = model(**features)
logits = output[0]
probs = torch.nn.functional.softmax(logits, dim=1).tolist()
idx = np.argmax(np.array(probs))
print(labels[idx], probs)
def run_model(context, query, **generator_args):
input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
"نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
"فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
"نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
"اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن....... (ノಠ益ಠ)ノ ",
" نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخرهباز چیست؟"
)
run_model(
" گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
" نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
"در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
" شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
"من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
"نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
microsoft/resnet-152 | 26958355fd2ec78b3e07b7094b8778c242907844 | 2022-07-01T17:33:09.000Z | [
"pytorch",
"tf",
"resnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/resnet-152 | 315 | null | transformers | 2,887 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# ResNet-152 v1.5
ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al.
Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ResNet (Residual Network) is a convolutional neural network that democratized the concepts of residual learning and skip connections. This enables training much deeper models.
This is ResNet v1.5, which differs from the original model: in the bottleneck blocks which require downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution. This difference makes ResNet50 v1.5 slightly more accurate (\~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec) according to [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch).

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ResNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-152")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-152")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/resnet).
### BibTeX entry and citation info
```bibtex
@inproceedings{he2016deep,
title={Deep residual learning for image recognition},
author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={770--778},
year={2016}
}
```
|
akshatpandeyme/DialoGPT-small-manpreet | c230949a55a5060e1e1578a2f5e58c6bd3713a1f | 2022-07-26T11:03:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | akshatpandeyme | null | akshatpandeyme/DialoGPT-small-manpreet | 315 | null | transformers | 2,888 | ---
tags:
- conversational
---
# manpreet DialoGPT Model |
AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2 | 7c1614a46aa544de7d3bfecf05de0734cc72d0ed | 2021-07-13T14:12:45.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"transformers",
"sentence-similarity"
] | sentence-similarity | false | AIDA-UPM | null | AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2 | 314 | 2 | transformers | 2,889 | ---
pipeline_tag: sentence-similarity
language: "multilingual"
tags:
- feature-extraction
- sentence-similarity
- transformers
- multilingual
---
# mstsb-paraphrase-multilingual-mpnet-base-v2
This is a fine-tuned version of the [sentence-transformers](https://www.SBERT.net) model `paraphrase-multilingual-mpnet-base-v2`, trained on the [Semantic Textual Similarity Benchmark](http://ixa2.si.ehu.eus/stswiki/index.php/Main_Page) extended to 15 languages: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering, semantic search and measuring the similarity between two sentences.
<!--- Describe your model here -->
This model is a fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` for semantic textual similarity with multilingual data. The dataset used for this fine-tuning is STSb extended to 15 languages with Google Translator. To maintain data quality, the sentence pairs with a confidence value below 0.7 were dropped. The extended dataset is available at [GitHub](https://github.com/Huertas97/Multilingual-STSB). The languages included in the extended version are: ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW. The pooling operation used to condense the word embeddings into a sentence embedding is mean pooling (more info below).
<!-- ## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
# It support several languages
sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"]
# The pooling technique is automatically detected (mean pooling)
model = SentenceTransformer('mstsb-paraphrase-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
``` -->
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# We should define the proper pooling function: Mean pooling
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["This is an example sentence", "Esta es otra frase de ejemplo", "最後の例文"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
model = AutoModel.from_pretrained('AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
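Since the model targets sentence similarity, a natural follow-up is to compare the pooled embeddings with cosine similarity. The short sketch below reuses the `sentence_embeddings` computed above; it is an illustrative addition, not part of the original card.

```python
import torch.nn.functional as F

# Normalize the embeddings so that their dot products equal cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarity_matrix = normalized @ normalized.T

# similarity_matrix[i][j] is the cosine similarity between sentences i and j
print(similarity_matrix)
```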
## Evaluation Results
<!--- Describe how your model was evaluated -->
Check the test results in the Semantic Textual Similarity tasks. The 15 languages available in the [Multilingual STSB](https://github.com/Huertas97/Multilingual-STSB) have been combined into monolingual and cross-lingual tasks, giving a total of 31 tasks. Monolingual tasks have both sentences from the same language source (e.g., Ar-Ar, Es-Es), while cross-lingual tasks pair two sentences in different languages, one of them being English (e.g., en-ar, en-es).
Here we compare the average multilingual semantic textual similarity capabilities of the `paraphrase-multilingual-mpnet-base-v2` base model and the fine-tuned `mstsb-paraphrase-multilingual-mpnet-base-v2` model across the 31 tasks. It is worth noting that both models are multilingual, but the second model is adjusted with multilingual data for semantic similarity. The average of the correlation coefficients is computed by transforming each correlation coefficient to a Fisher's z value, averaging them, and then back-transforming to a correlation coefficient.
| Model | Average Spearman Cosine Test |
|:---------------------------------------------:|:------------------------------:|
| mstsb-paraphrase-multilingual-mpnet-base-v2 | 0.835890 |
| paraphrase-multilingual-mpnet-base-v2 | 0.818896 |
<br>
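The Fisher's z averaging described above can be illustrated with a short sketch; the correlation values used here are made up for the example and are not the reported task scores.

```python
import numpy as np

# Hypothetical per-task Spearman correlations
correlations = np.array([0.83, 0.85, 0.80, 0.86])

# Fisher's z transform (arctanh), arithmetic mean, then back-transform (tanh)
average_correlation = np.tanh(np.arctanh(correlations).mean())
print(average_correlation)
```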
The following tables break down the performance of `mstsb-paraphrase-multilingual-mpnet-base-v2` on the different tasks. For the sake of readability, the tasks have been split into monolingual and cross-lingual groups.
| Monolingual Task | Pearson Cosine test | Spearman Cosine test |
|:------------------:|:---------------------:|:-----------------------:|
| en;en | 0.868048310692506 | 0.8740170943535747 |
| ar;ar | 0.8267139454193487 | 0.8284459741532022 |
| cs;cs | 0.8466821720942157 | 0.8485417688803879 |
| de;de | 0.8517285961812183 | 0.8557680051557893 |
| es;es | 0.8519185309064691 | 0.8552243211580456 |
| fr;fr | 0.8430951067985064 | 0.8466614534379704 |
| hi;hi | 0.8178258630578092 | 0.8176462079184331 |
| it;it | 0.8475909574305637 | 0.8494216064459076 |
| ja;ja | 0.8435588859386477 | 0.8456031494178619 |
| nl;nl | 0.8486765104527032 | 0.8520856765262531 |
| pl;pl | 0.8407840177883407 | 0.8443070467300299 |
| pt;pt | 0.8534880178249296 | 0.8578544068829622 |
| ru;ru | 0.8390897585455678 | 0.8423041443534423 |
| tr;tr | 0.8382125451820572 | 0.8421587450058385 |
| zh-CN;zh-CN | 0.826233678946644 | 0.8248515460782744 |
| zh-TW;zh-TW | 0.8242683809675422 | 0.8235506799952028 |
<br>
| Cross-lingual Task | Pearson Cosine test | Spearman Cosine test |
|:--------------------:|:---------------------:|:-----------------------:|
| en;ar | 0.7990830340462535 | 0.7956792016468148 |
| en;cs | 0.8381274879061265 | 0.8388713450024455 |
| en;de | 0.8414439600928739 | 0.8441971698649943 |
| en;es | 0.8442337511356952 | 0.8445035292903559 |
| en;fr | 0.8378437644605063 | 0.8387903367907733 |
| en;hi | 0.7951955086055527 | 0.7905052217683244 |
| en;it | 0.8415686372978766 | 0.8419480899107785 |
| en;ja | 0.8094306665283388 | 0.8032512280936449 |
| en;nl | 0.8389526140129767 | 0.8409310421803277 |
| en;pl | 0.8261309163979578 | 0.825976253023656 |
| en;pt | 0.8475546209070765 | 0.8506606391790897 |
| en;ru | 0.8248514914263723 | 0.8224871183202255 |
| en;tr | 0.8191803661207868 | 0.8194200775744044 |
| en;zh-CN | 0.8147678083378249 | 0.8102089470690433 |
| en;zh-TW | 0.8107272160374955 | 0.8056129680510944 |
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 687 with parameters:
```
{'batch_size': 132, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 140,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Helsinki-NLP/opus-mt-it-es | 04994afcc6625a8c0e4b7867266edeee466fab01 | 2021-09-10T13:52:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-es | 314 | null | transformers | 2,890 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-it-es
* source languages: it
* target languages: es
* OPUS readme: [it-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.it.es | 61.2 | 0.761 |
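The card does not include a usage snippet, so here is a minimal sketch of how OPUS-MT Marian checkpoints are typically loaded with the `transformers` library; the example sentence is arbitrary.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an Italian sentence into Spanish
batch = tokenizer(["La vita è bella."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```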
|
huggingtweets/aimbotaimy-ladydarknest | b02fe5af0bad7a04b55646e7316fa43f10030664 | 2021-07-25T20:33:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aimbotaimy-ladydarknest | 314 | null | transformers | 2,891 | ---
language: en
thumbnail: https://www.huggingtweets.com/aimbotaimy-ladydarknest/1627245180529/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374872808136835072/hPahIg-A_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1409725677495009283/RPVDIGan_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AimbotAimy 🍞🔞 NSFW V-Tuber & Demon Lord Yeefi NSFW🔞</div>
<div style="text-align: center; font-size: 14px;">@aimbotaimy-ladydarknest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AimbotAimy 🍞🔞 NSFW V-Tuber & Demon Lord Yeefi NSFW🔞.
| Data | AimbotAimy 🍞🔞 NSFW V-Tuber | Demon Lord Yeefi NSFW🔞 |
| --- | --- | --- |
| Tweets downloaded | 528 | 3242 |
| Retweets | 61 | 957 |
| Short tweets | 130 | 392 |
| Tweets kept | 337 | 1893 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uz56dprc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aimbotaimy-ladydarknest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1di7czlx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1di7czlx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aimbotaimy-ladydarknest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
shelb-doc/DialoGPT-medium-ash | 85ee677ed6f2b1039dd374a42cc0e86704886ee6 | 2021-09-13T17:50:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | shelb-doc | null | shelb-doc/DialoGPT-medium-ash | 314 | null | transformers | 2,892 | ---
tags:
- conversational
---
# Ash DialoGPT Model |
textattack/albert-base-v2-QQP | 8f4e2471d0571cc832713021ca531c052dadeba0 | 2020-07-06T16:30:55.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/albert-base-v2-QQP | 314 | null | transformers | 2,893 | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the GLUE dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9073707642839476, as measured by the
eval set accuracy, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
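As an illustrative addition, the sketch below shows one way to run inference with this QQP (duplicate-question) classifier; the label order is an assumption and should be checked against `model.config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/albert-base-v2-QQP"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score a pair of questions for being duplicates of each other
inputs = tokenizer("How do I learn Python?",
                   "What is the best way to learn Python?",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # assumed order: [not duplicate, duplicate]; verify via model.config.id2label
```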
|
MayankGupta/DialoGPT-small-harrypotter | 1bbbdf232b766be07985bad8a2b3421c9f8bd9e2 | 2021-08-29T13:14:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MayankGupta | null | MayankGupta/DialoGPT-small-harrypotter | 313 | null | transformers | 2,894 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
arifbhrn/DialogGPT-small-Rickk | 112e62cea0c31ae948b5477cbacab16dc2a23585 | 2021-08-29T15:08:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | arifbhrn | null | arifbhrn/DialogGPT-small-Rickk | 313 | null | transformers | 2,895 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
huggingtweets/dealingporn | 274dd5f6ec23836d3eca405daa3c4cc22e47a2e2 | 2021-05-22T01:00:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dealingporn | 313 | null | transformers | 2,896 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1172237959032201219/8tZPfA9n_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">💟 Dealing Porn 💟 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@dealingporn bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dealingporn's tweets](https://twitter.com/dealingporn).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3213</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>2141</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1072</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/10cc3bw9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dealingporn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2d2vjn6v) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2d2vjn6v/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/dealingporn'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
tli8hf/unqover-roberta-large-squad | 179dcdc07eb92bac816fd430170aa64cb64231de | 2021-05-20T22:39:19.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tli8hf | null | tli8hf/unqover-roberta-large-squad | 313 | null | transformers | 2,897 | Entry not found |
IDEA-CCNL/Erlangshen-Roberta-330M-Sentiment | 6954910fb6c3b2f0105ced733bdcb386562a3a9b | 2022-05-12T09:51:51.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers",
"NLU",
"Sentiment",
"license:apache-2.0"
] | text-classification | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-Roberta-330M-Sentiment | 313 | null | transformers | 2,898 | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
inference: true
widget:
- text: "今天心情不好"
---
# Erlangshen-Roberta-330M-Sentiment, model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collected 8 sentiment datasets in the Chinese domain for fine-tuning, with a total of 227,347 samples. Our model is mainly based on [roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext).
## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-Sentiment')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-Sentiment')

text = '今天心情不好'

# Compute class probabilities over the sentiment labels for the input text
output = model(torch.tensor([tokenizer.encode(text)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```
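The softmax output above is a probability distribution over the sentiment classes. The card does not state the class order, so it is safest to read it from the model configuration, for example:

```python
# Inspect how output indices map to sentiment labels (assumes id2label is set in the config)
print(model.config.id2label)
```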
## Scores on downstream Chinese tasks
| Model | ASAP-SENT | ASAP-ASPECT | ChnSentiCorp |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-Sentiment | 97.77 | 97.31 | 96.61 |
| Erlangshen-Roberta-330M-Sentiment | 97.9 | 97.51 | 96.66 |
| Erlangshen-MegatronBert-1.3B-Sentiment | 98.1 | 97.8 | 97 |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
yihsuan/mt5_chinese_small | 338c78a9a4e6ff81d4e613a7b742a99b864b8f28 | 2022-04-26T06:36:56.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"zh",
"transformers",
"summarization",
"mT5",
"autotrain_compatible"
] | summarization | false | yihsuan | null | yihsuan/mt5_chinese_small | 313 | null | transformers | 2,899 | ---
tags:
- summarization
- mT5
language:
- zh
widget:
- text: "專家稱維康桑格研究所(Wellcome Sanger Institute)的上述研究發現「令人震驚」而且「發人深省」。基因變異指關於我們身體成長和管理的相關指令,也就是DNA當中發生的變化。長期以來,變異一直被當作癌症的根源,但是數十年來關於變異是否對衰老有重要影響一直存在爭論。桑格研究所的研究人員說他們得到了「第一個試驗性證據」,證明了兩者的關係。他們分析了預期壽命各異的物種基因變異的不同速度。研究人員分析了貓、黑白疣猴、狗、雪貂、長頸鹿、馬、人、獅子、裸鼴鼠、兔子、老鼠、環尾狐猴和老虎等十幾種動物的DNA。發表在《自然》雜誌上的研究顯示,老鼠在短暫的生命當中每年經歷了將近800次變異,老鼠的壽命一般不到4年。"
license: apache-2.0
metrics:
- rouge
model-index:
- name: best_model_test_0423_small
  results: []
---
# best_model_test_0423_small
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6341
- Rouge1: 18.7681
- Rouge2: 6.3762
- Rougel: 18.6081
- Rougelsum: 18.6173
- Gen Len: 22.1086
## Model description
More information needed
## Intended uses & limitations
More information needed
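As a rough illustration of the intended use (Chinese abstractive summarization), a minimal inference sketch is shown below; the generation parameters are arbitrary choices for the example and are not taken from the original training setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yihsuan/mt5_chinese_small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "專家稱維康桑格研究所的上述研究發現令人震驚而且發人深省。"  # any Chinese news passage
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```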
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
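For readers who want to reproduce a similar run, the listed hyperparameters map roughly onto `Seq2SeqTrainingArguments` as sketched below; the output directory and any setting not listed above are placeholders, not values from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="best_model_test_0423_small",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    predict_with_generate=True,  # assumed, needed for ROUGE during evaluation
)
```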
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 5.8165 | 0.05 | 1000 | 3.6541 | 11.6734 | 3.9865 | 11.5734 | 11.5375 | 18.0056 |
| 4.306 | 0.1 | 2000 | 3.4291 | 12.0417 | 3.8419 | 11.9231 | 11.9223 | 16.8948 |
| 4.1091 | 0.16 | 3000 | 3.3643 | 13.661 | 4.5171 | 13.5123 | 13.5076 | 19.4016 |
| 3.9637 | 0.21 | 4000 | 3.2574 | 13.8443 | 4.1761 | 13.689 | 13.6927 | 18.4288 |
| 3.8205 | 0.26 | 5000 | 3.2434 | 13.5371 | 4.3639 | 13.3551 | 13.3552 | 21.5776 |
| 3.7262 | 0.31 | 6000 | 3.1690 | 14.3668 | 4.8048 | 14.2191 | 14.1906 | 21.5548 |
| 3.6887 | 0.36 | 7000 | 3.0657 | 14.3265 | 4.436 | 14.212 | 14.205 | 20.89 |
| 3.6337 | 0.42 | 8000 | 3.0318 | 14.6809 | 4.8345 | 14.5378 | 14.5331 | 20.3651 |
| 3.5443 | 0.47 | 9000 | 3.0554 | 15.3372 | 4.9163 | 15.1794 | 15.1781 | 21.7742 |
| 3.5203 | 0.52 | 10000 | 2.9793 | 14.9278 | 4.9656 | 14.7491 | 14.743 | 20.8113 |
| 3.4936 | 0.57 | 11000 | 3.0079 | 15.7705 | 5.1453 | 15.5582 | 15.5756 | 23.4274 |
| 3.4592 | 0.62 | 12000 | 2.9721 | 15.0201 | 5.1612 | 14.8508 | 14.8198 | 22.7007 |
| 3.377 | 0.67 | 13000 | 3.0112 | 15.9595 | 5.1133 | 15.78 | 15.7774 | 23.4427 |
| 3.4158 | 0.73 | 14000 | 2.9239 | 14.7984 | 5.051 | 14.6943 | 14.6581 | 21.6009 |
| 3.378 | 0.78 | 15000 | 2.8897 | 16.5128 | 5.1923 | 16.3523 | 16.3265 | 22.0828 |
| 3.3231 | 0.83 | 16000 | 2.9347 | 16.9997 | 5.5524 | 16.8534 | 16.8737 | 22.5807 |
| 3.3268 | 0.88 | 17000 | 2.9116 | 16.0261 | 5.4226 | 15.9234 | 15.914 | 23.6988 |
| 3.3127 | 0.93 | 18000 | 2.8610 | 16.6255 | 5.3554 | 16.4729 | 16.4569 | 22.9481 |
| 3.2664 | 0.99 | 19000 | 2.8606 | 17.7703 | 5.9475 | 17.6229 | 17.6259 | 23.4423 |
| 3.1718 | 1.04 | 20000 | 2.8764 | 17.301 | 5.6262 | 17.122 | 17.1104 | 23.0093 |
| 3.0987 | 1.09 | 21000 | 2.8282 | 16.4718 | 5.2077 | 16.3394 | 16.3401 | 20.9697 |
| 3.1486 | 1.14 | 22000 | 2.8235 | 18.5594 | 5.9469 | 18.3882 | 18.3799 | 22.7291 |
| 3.1435 | 1.19 | 23000 | 2.8261 | 18.111 | 6.0309 | 17.9593 | 17.9613 | 22.9612 |
| 3.1049 | 1.25 | 24000 | 2.8068 | 17.124 | 5.5675 | 16.9714 | 16.9876 | 22.5558 |
| 3.1357 | 1.3 | 25000 | 2.8014 | 17.3916 | 5.8671 | 17.2148 | 17.2502 | 23.0075 |
| 3.0904 | 1.35 | 26000 | 2.7790 | 17.419 | 5.6689 | 17.3125 | 17.3058 | 22.1492 |
| 3.0877 | 1.4 | 27000 | 2.7462 | 17.0605 | 5.4735 | 16.9414 | 16.9378 | 21.7522 |
| 3.0694 | 1.45 | 28000 | 2.7563 | 17.752 | 5.8889 | 17.5967 | 17.619 | 23.2005 |
| 3.0498 | 1.51 | 29000 | 2.7521 | 17.9056 | 5.7754 | 17.7624 | 17.7836 | 21.9369 |
| 3.0566 | 1.56 | 30000 | 2.7468 | 18.6531 | 6.0538 | 18.5397 | 18.5038 | 22.2358 |
| 3.0489 | 1.61 | 31000 | 2.7450 | 18.4869 | 5.9297 | 18.3139 | 18.3169 | 22.0108 |
| 3.0247 | 1.66 | 32000 | 2.7449 | 18.5192 | 5.9966 | 18.3721 | 18.3569 | 22.2071 |
| 2.9877 | 1.71 | 33000 | 2.7160 | 18.1655 | 5.9294 | 18.0304 | 18.0836 | 21.4595 |
| 3.0383 | 1.76 | 34000 | 2.7202 | 18.4959 | 6.2413 | 18.3363 | 18.3431 | 22.9732 |
| 3.041 | 1.82 | 35000 | 2.6948 | 17.5306 | 5.8119 | 17.4011 | 17.4149 | 21.9435 |
| 2.9285 | 1.87 | 36000 | 2.6957 | 18.6418 | 6.1394 | 18.514 | 18.4823 | 22.5174 |
| 3.0556 | 1.92 | 37000 | 2.7000 | 18.7387 | 6.0585 | 18.5761 | 18.574 | 22.9315 |
| 3.0033 | 1.97 | 38000 | 2.6974 | 17.9387 | 6.1387 | 17.8271 | 17.8111 | 22.4726 |
| 2.9207 | 2.02 | 39000 | 2.6998 | 18.6073 | 6.1906 | 18.3891 | 18.4103 | 23.0274 |
| 2.8922 | 2.08 | 40000 | 2.6798 | 18.4017 | 6.2244 | 18.2321 | 18.2296 | 22.0697 |
| 2.8938 | 2.13 | 41000 | 2.6666 | 18.8016 | 6.2066 | 18.6411 | 18.6353 | 21.7017 |
| 2.9124 | 2.18 | 42000 | 2.6606 | 18.7544 | 6.3533 | 18.5923 | 18.5739 | 21.4303 |
| 2.8597 | 2.23 | 43000 | 2.6947 | 18.8672 | 6.4526 | 18.7416 | 18.7482 | 22.3352 |
| 2.8435 | 2.28 | 44000 | 2.6738 | 18.9405 | 6.356 | 18.7791 | 18.7729 | 21.9081 |
| 2.8672 | 2.34 | 45000 | 2.6734 | 18.7509 | 6.3991 | 18.6175 | 18.5828 | 21.8869 |
| 2.899 | 2.39 | 46000 | 2.6575 | 18.5529 | 6.3489 | 18.4139 | 18.401 | 21.7694 |
| 2.8616 | 2.44 | 47000 | 2.6485 | 18.7563 | 6.268 | 18.6368 | 18.6253 | 21.5685 |
| 2.8937 | 2.49 | 48000 | 2.6486 | 18.6525 | 6.3426 | 18.5184 | 18.5129 | 22.3337 |
| 2.8446 | 2.54 | 49000 | 2.6572 | 18.6529 | 6.2655 | 18.4915 | 18.4764 | 22.3331 |
| 2.8676 | 2.59 | 50000 | 2.6608 | 19.0913 | 6.494 | 18.929 | 18.9233 | 22.132 |
| 2.8794 | 2.65 | 51000 | 2.6583 | 18.7648 | 6.459 | 18.6276 | 18.6125 | 22.2414 |
| 2.8836 | 2.7 | 52000 | 2.6512 | 18.7243 | 6.3865 | 18.5848 | 18.5763 | 22.2551 |
| 2.8174 | 2.75 | 53000 | 2.6409 | 18.9393 | 6.3914 | 18.7733 | 18.7715 | 22.1243 |
| 2.8494 | 2.8 | 54000 | 2.6396 | 18.6126 | 6.4389 | 18.4673 | 18.4516 | 21.7638 |
| 2.9025 | 2.85 | 55000 | 2.6341 | 18.7681 | 6.3762 | 18.6081 | 18.6173 | 22.1086 |
| 2.8754 | 2.91 | 56000 | 2.6388 | 19.0828 | 6.5203 | 18.9334 | 18.9285 | 22.3497 |
| 2.8489 | 2.96 | 57000 | 2.6375 | 18.9219 | 6.4922 | 18.763 | 18.7437 | 21.9321 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|