modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ai4bharat/IndicBARTSS | 4b2669d25bc24a46ad2501c2b759451b7a4a1a26 | 2022-03-15T05:48:12.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2109.02903",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/IndicBARTSS | 2,151 | 2 | transformers | 1,300 | IndicBARTSS is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use the IndicBARTSS model to build natural language generation applications for Indian languages by fine-tuning it with supervised training data for tasks like machine translation, summarization, question generation, etc. Some salient features of IndicBARTSS are:
<ul>
<li>Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, Telugu and English. Not all of these languages are supported by mBART50 and mT5.</li>
<li>The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding.</li>
<li>Trained on large Indic language corpora (452 million sentences and 9 billion tokens), which also include Indian English content.</li>
<li>Unlike ai4bharat/IndicBART, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari.</li>
</ul>
You can read more about IndicBARTSS in this <a href="https://arxiv.org/abs/2109.02903">paper</a>.
For detailed documentation, look here: https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/
# Pre-training corpus
We used the <a href="https://indicnlp.ai4bharat.org/corpora/">IndicCorp</a> data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.
# Usage:
```python
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBARTSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBARTSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBARTSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBARTSS was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])
out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 942, 43, 32720, 8384, 64001]])
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am a boy
# What if we mask?
inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am happy
inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मैं जानता हूँ
inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मला ओळखलं पाहिजे
```
# Notes:
1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do as in https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
# Fine-tuning on a downstream task
1. If you wish to fine-tune this model, then you can do so using the <a href="https://github.com/prajdabre/yanmtt">YANMTT</a> toolkit, following the instructions <a href="https://github.com/AI4Bharat/indic-bart ">here</a>.
2. (Untested) Alternatively, you may use the official huggingface scripts for <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation">translation</a> and <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization">summarization</a>.
# Contributors
<ul>
<li> Raj Dabre </li>
<li> Himani Shrotriya </li>
<li> Anoop Kunchukuttan </li>
<li> Ratish Puduppully </li>
<li> Mitesh M. Khapra </li>
<li> Pratyush Kumar </li>
</ul>
# Paper
If you use IndicBARTSS, please cite the following paper:
```
@misc{dabre2021indicbart,
title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages},
author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
year={2021},
eprint={2109.02903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# License
The model is available under the MIT License. |
zjukg/OntoProtein | d27d23e56e1b565958a5016eaf82847fa08a427a | 2022-04-12T14:42:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"protein",
"dataset:ProteinKG25",
"transformers",
"protein language model",
"autotrain_compatible"
] | fill-mask | false | zjukg | null | zjukg/OntoProtein | 2,150 | 3 | transformers | 1,301 | ---
language: protein
tags:
- protein language model
datasets:
- ProteinKG25
widget:
- text: "D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"
---
# OntoProtein model
Pretrained model on protein sequences using masked language modeling (MLM) and knowledge embedding (KE) objectives. It was introduced in [this paper](https://openreview.net/pdf?id=yfe1VMYAXa4) and first released in [this repository](https://github.com/zjunlp/OntoProtein). This model is trained on uppercase amino acids: it only works with capital-letter amino acids.
## Model description
OntoProtein is the first general framework that incorporates the structure of GO (Gene Ontology) into protein pre-training models. We construct a novel large-scale knowledge graph that consists of GO and its related proteins, where every node in the graph is described by gene annotation texts or protein sequences. We propose novel contrastive learning with knowledge-aware negative sampling to jointly optimize the knowledge-graph and protein embeddings during pre-training.
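For illustration, here is a minimal fill-mask sketch (an assumption, not prescribed by this card: it presumes the checkpoint exposes a standard BERT masked-LM head and uses the space-separated, uppercase input format shown in the widget above):
```python
# Hedged usage sketch: query OntoProtein through the generic fill-mask pipeline.
# Assumption: the hub checkpoint ships a BERT masked-LM head compatible with this pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="zjukg/OntoProtein")

# Amino acids are uppercase and space-separated, as in the widget example.
sequence = "D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"
for prediction in unmasker(sequence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 4))
```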
### BibTeX entry and citation info
```bibtex
@article{zhang2022ontoprotein,
title={OntoProtein: Protein Pretraining With Gene Ontology Embedding},
author={Zhang, Ningyu and Bi, Zhen and Liang, Xiaozhuan and Cheng, Siyuan and Hong, Haosen and Deng, Shumin and Lian, Jiazhang and Zhang, Qiang and Chen, Huajun},
journal={arXiv preprint arXiv:2201.11147},
year={2022}
}
```
|
prajjwal1/bert-medium-mnli | 82e4a3118f63cba6e97875aa1b7e6a674a193063 | 2021-10-05T17:56:07.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"arxiv:2110.01518",
"transformers"
] | text-classification | false | prajjwal1 | null | prajjwal1/bert-medium-mnli | 2,149 | null | transformers | 1,302 | The following model is a PyTorch pre-trained model obtained by converting a TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
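As an illustration, a minimal NLI classification sketch follows (assuming standard `AutoModelForSequenceClassification` usage; the id-to-label mapping is not documented here and should be checked against the model config):
```python
# Minimal sketch: score a premise/hypothesis pair with the sequence-classification head.
# Assumption: verify model.config.id2label before interpreting the predicted index
# as entailment/neutral/contradiction.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-medium-mnli")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-medium-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(predicted, model.config.id2label.get(predicted, predicted))
```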
If you use the model, please consider citing the paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
```
MNLI: 75.86%
MNLI-mm: 77.03%
```
These models are trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
sentence-transformers/all-MiniLM-L12-v1 | c8f1d5b49a00a0b0025e540ceca2c38101fc926f | 2021-08-30T20:01:21.000Z | [
"pytorch",
"bert",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/all-MiniLM-L12-v1 | 2,148 | 2 | sentence-transformers | 1,303 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-MiniLM-L12-v1
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v1')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L12-v1)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 128 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased). Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair from the batch.
We then apply the cross entropy loss by comparing with true pairs.
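For intuition, here is a sketch of this in-batch objective (illustrative only: the similarity scaling factor is an assumption, and the actual training code is in `train_script.py`):
```python
# Sketch of the in-batch contrastive loss described above. For L2-normalized embeddings,
# row i of the cosine-similarity matrix should be highest at column i (its true pair),
# so the cross-entropy target for row i is simply i.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    a = F.normalize(anchor_emb, p=2, dim=1)
    p = F.normalize(positive_emb, p=2, dim=1)
    scores = a @ p.T * scale               # shape: (batch_size, batch_size)
    labels = torch.arange(scores.size(0))  # true pair of anchor i is positive i
    return F.cross_entropy(scores, labels)
```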
#### Hyper parameters
We trained our model on a TPU v3-8 for 540k steps, using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,124,818,467** | |
shahrukhx01/roberta-base-boolq | 87b8505e8f651d5aadedb50ea6737871a45a83b8 | 2022-06-02T08:36:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"transformers",
"boolean-qa"
] | text-classification | false | shahrukhx01 | null | shahrukhx01/roberta-base-boolq | 2,147 | null | transformers | 1,304 | ---
language: "en"
tags:
- boolean-qa
widget:
- text: "Is Berlin the smallest city of Germany? <s> Berlin is the capital and largest city of Germany by both area and population. Its 3.8 million inhabitants make it the European Union's most populous city, according to the population within city limits "
---
# Labels Map
LABEL_0 => **"NO"** <br/>
LABEL_1 => **"YES"**
```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

# Pick GPU if available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForSequenceClassification.from_pretrained("shahrukhx01/roberta-base-boolq")
model.to(device)
#model.push_to_hub("roberta-base-boolq")
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/roberta-base-boolq")

def predict(question, passage):
    sequence = tokenizer.encode_plus(question, passage, return_tensors="pt")['input_ids'].to(device)
    logits = model(sequence)[0]
    probabilities = torch.softmax(logits, dim=1).detach().cpu().tolist()[0]
    proba_yes = round(probabilities[1], 2)
    proba_no = round(probabilities[0], 2)
    print(f"Question: {question}, Yes: {proba_yes}, No: {proba_no}")

passage = """Berlin is the capital and largest city of Germany by both area and population. Its 3.8 million inhabitants make it the European Union's most populous city,
according to the population within city limits."""
question = "Is Berlin the smallest city of Germany?"
predict(question, passage)
```
|
artemnech/enrut5-base | 13523cbed3ee1390197d050cc52b0e6f9aa3ea45 | 2022-07-25T05:17:35.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ru",
"en",
"transformers",
"russian",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | artemnech | null | artemnech/enrut5-base | 2,146 | null | transformers | 1,305 | ---
language: ["ru", "en"]
tags:
- russian
license: mit
widget:
- text: "translate ru to en: Интересный момент. Модель не видела русских диалогов, но может их понимать"
---
This is a pruned version of mt5-base ([google/mt5-base](https://huggingface.co/google/mt5-base)) with only some Russian and English embeddings left.
The model has been fine-tuned for several tasks:
* translation (opus100 dataset)
* dialog (daily dialog dataset)
How to use:
```python
# !pip install transformers sentencepiece
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, T5Tokenizer
import torch
model_name = 'artemnech/enrut5-base'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
def generate(text, **kwargs):
    model.eval()
    inputs = tokenizer(text, return_tensors='pt')
    with torch.no_grad():
        hypotheses = model.generate(**inputs, **kwargs)
    return tokenizer.decode(hypotheses[0], skip_special_tokens=True)
print(generate('translate ru to en: Интересный момент. Модель не видела русских диалогов, но может их понимать', num_beams=4,))
# The Model didn't see Russian dialogues, but can understand them.
print(generate("translate en to ru: The Model didn't see Russian dialogues, but can understand them.", num_beams=4,))
# Модель не видела русских диалога, но может понимать их.
print(generate('dialog: user1>>: Hello', num_beams=2))
# Hi
print(generate('dialog: user1>>: Hello user2>>: Hi user1>>: Would you like to drink something?', num_beams=2))
# I'd like to drink a cup of coffee.
#An interesting point. The model has not seen Russian dialogues, but can understand them
print(generate('dialog: user1>>: Привет'))
# Hi
print(generate('dialog: user1>>: Привет user2>>: Hi user1>>: Хочешь выпить что-нибудь?', num_beams=2))
# I'd like to have a cup of coffee.
```
|
whaleloops/phrase-bert | 6f68f4dc2d28aadefa038c79023dc7dfd51f6495 | 2021-11-03T15:04:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2109.06304",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | whaleloops | null | whaleloops/phrase-bert | 2,144 | 5 | sentence-transformers | 1,306 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# whaleloops/phrase-bert
This is the official repository for the EMNLP 2021 long paper [Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration](https://arxiv.org/abs/2109.06304). We provide [code](https://github.com/sf-wa-326/phrase-bert-topic-model) for training and evaluating Phrase-BERT in addition to the datasets used in the paper.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Our model is tested on pytorch=1.9.0, transformers=4.8.1, sentence-transformers=2.1.0.
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
phrase_list = [ 'play an active role', 'participate actively', 'active lifestyle']
model = SentenceTransformer('whaleloops/phrase-bert')
phrase_embs = model.encode( phrase_list )
[p1, p2, p3] = phrase_embs
```
As in sentence-BERT, the default output is a list of numpy arrays:
````
for phrase, embedding in zip(phrase_list, phrase_embs):
print("Phrase:", phrase)
print("Embedding:", embedding)
print("")
````
An example of computing the dot product of phrase embeddings:
````
import numpy as np
print(f'The dot product between phrase 1 and 2 is: {np.dot(p1, p2)}')
print(f'The dot product between phrase 1 and 3 is: {np.dot(p1, p3)}')
print(f'The dot product between phrase 2 and 3 is: {np.dot(p2, p3)}')
````
An example of computing cosine similarity of phrase embeddings:
````
import torch
from torch import nn
cos_sim = nn.CosineSimilarity(dim=0)
print(f'The cosine similarity between phrase 1 and 2 is: {cos_sim( torch.tensor(p1), torch.tensor(p2))}')
print(f'The cosine similarity between phrase 1 and 3 is: {cos_sim( torch.tensor(p1), torch.tensor(p3))}')
print(f'The cosine similarity between phrase 2 and 3 is: {cos_sim( torch.tensor(p2), torch.tensor(p3))}')
````
The output should look like:
````
The dot product between phrase 1 and 2 is: 218.43600463867188
The dot product between phrase 1 and 3 is: 165.48483276367188
The dot product between phrase 2 and 3 is: 160.51708984375
The cosine similarity between phrase 1 and 2 is: 0.8142536282539368
The cosine similarity between phrase 1 and 3 is: 0.6130303144454956
The cosine similarity between phrase 2 and 3 is: 0.584893524646759
````
## Evaluation
Given the lack of a unified phrase embedding evaluation benchmark, we collect the following five phrase semantics evaluation tasks, which are described further in our paper:
* Turney [[Download](https://storage.googleapis.com/phrase-bert/turney/data.txt) ]
* BiRD [[Download](https://storage.googleapis.com/phrase-bert/bird/data.txt)]
* PPDB [[Download](https://storage.googleapis.com/phrase-bert/ppdb/examples.json)]
* PPDB-filtered [[Download](https://storage.googleapis.com/phrase-bert/ppdb_exact/examples.json)]
* PAWS-short [[Download Train-split](https://storage.googleapis.com/phrase-bert/paws_short/train_examples.json) ] [[Download Dev-split](https://storage.googleapis.com/phrase-bert/paws_short/dev_examples.json) ] [[Download Test-split](https://storage.googleapis.com/phrase-bert/paws_short/test_examples.json) ]
Change `config/model_path.py` with the model path according to your directories, then:
* For evaluation on Turney, run `python eval_turney.py`
* For evaluation on BiRD, run `python eval_bird.py`
* For evaluation on PPDB / PPDB-filtered / PAWS-short, run `eval_ppdb_paws.py` with:
````
nohup python -u eval_ppdb_paws.py \
--full_run_mode \
--task <task-name> \
--data_dir <input-data-dir> \
--result_dir <result-storage-dr> \
>./output.txt 2>&1 &
````
## Train your own Phrase-BERT
If you would like to go beyond using the pre-trained Phrase-BERT model, you may train your own Phrase-BERT using data from the domain you are interested in. Please refer to
`phrase-bert/phrase_bert_finetune.py`
The datasets we used to fine-tune Phrase-BERT are here: [training data csv file](https://storage.googleapis.com/phrase-bert/phrase-bert-ft-data/pooled_context_para_triples_p%3D0.8_train.csv) and [validation data csv file](https://storage.googleapis.com/phrase-bert/phrase-bert-ft-data/pooled_context_para_triples_p%3D0.8_valid.csv).
To reproduce the trained Phrase-BERT, please run:
````
export INPUT_DATA_PATH=<directory-of-phrasebert-finetuning-data>
export TRAIN_DATA_FILE=<training-data-filename.csv>
export VALID_DATA_FILE=<validation-data-filename.csv>
export INPUT_MODEL_PATH=bert-base-nli-stsb-mean-tokens
export OUTPUT_MODEL_PATH=<directory-of-saved-model>

python -u phrase_bert_finetune.py \
    --input_data_path $INPUT_DATA_PATH \
    --train_data_file $TRAIN_DATA_FILE \
    --valid_data_file $VALID_DATA_FILE \
    --input_model_path $INPUT_MODEL_PATH \
    --output_model_path $OUTPUT_MODEL_PATH
````
## Citation:
Please cite us if you find this useful:
````
@inproceedings{phrasebertwang2021,
author={Shufan Wang and Laure Thompson and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2021",
Title={Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration}
}
````
|
michaelrglass/albert-base-rci-wikisql-col | d51bdace09428c72213107d0fe12709c1d7d5d2f | 2021-06-16T15:58:03.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | michaelrglass | null | michaelrglass/albert-base-rci-wikisql-col | 2,143 | null | transformers | 1,307 | Entry not found |
jjzha/jobspanbert-base-cased | 0565591b92a2f8da7094dbf05c3ec6e2b93c0987 | 2022-07-26T08:15:15.000Z | [
"pytorch",
"bert",
"en",
"transformers",
"continuous pretraining",
"job postings",
"JobSpanBERT"
] | null | false | jjzha | null | jjzha/jobspanbert-base-cased | 2,140 | null | transformers | 1,308 | ---
language:
- en
tags:
- continuous pretraining
- job postings
- JobSpanBERT
---
# JobSpanBERT
This is the JobSpanBERT model from:
Mike Zhang, Kristian Nørgaard Jensen, Sif Dam Sonniks, and Barbara Plank. __SkillSpan: Hard and Soft Skill Extraction from Job Postings__. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
This model is continuously pre-trained from a spanbert-base-cased checkpoint (which can also be found in our repository) on ~3.2M sentences from job postings. More information can be found in the paper.
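For illustration, a minimal loading sketch (an assumption, not prescribed by the card: the checkpoint is used as a plain BERT encoder to be fine-tuned, e.g. for token classification on skill spans):
```python
# Hedged sketch: load JobSpanBERT as a feature encoder with the standard Auto classes.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jjzha/jobspanbert-base-cased")
model = AutoModel.from_pretrained("jjzha/jobspanbert-base-cased")

inputs = tokenizer("Experience with Python and strong communication skills.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```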
If you use this model, please cite the following paper:
```
@inproceedings{zhang-etal-2022-skillspan,
title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings",
author = "Zhang, Mike and
Jensen, Kristian N{\o}rgaard and
Sonniks, Sif and
Plank, Barbara",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.366",
pages = "4962--4984",
abstract = "Skill Extraction (SE) is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. We release its respective guidelines created over three different sources annotated for hard and soft skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019). To improve upon this baseline, we experiment with language models that are optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et al., 2020), and multi-task learning (Caruana, 1997). Our results show that the domain-adapted models significantly outperform their non-adapted counterparts, and single-task outperforms multi-task learning.",
}
``` |
Laggrif/DialoGPT-medium-Luke | 36fcfc7f3d7209dcb7a349804a7a6f5dab2ddd94 | 2022-06-21T17:50:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Laggrif | null | Laggrif/DialoGPT-medium-Luke | 2,138 | null | transformers | 1,309 | ---
tags:
- conversational
---
# Luke DialoGPT Model |
GroNLP/gpt2-small-italian | 9a5b0043f33d9adacd23d53e3a8e9c70f71febc9 | 2021-05-21T09:58:53.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"it",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-small"
] | text-generation | false | GroNLP | null | GroNLP/gpt2-small-italian | 2,136 | null | transformers | 1,310 | ---
language: it
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (small)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
hf-internal-testing/tiny-random-unispeech | a90315fed34d4891a62a539637210f2bdd30e68f | 2022-01-26T13:56:35.000Z | [
"pytorch",
"unispeech",
"audio-classification",
"transformers"
] | audio-classification | false | hf-internal-testing | null | hf-internal-testing/tiny-random-unispeech | 2,136 | null | transformers | 1,311 | Entry not found |
mrsinghania/asr-question-detection | 90b29f15265e6819044d484039b1ae9ca683342d | 2021-09-21T06:44:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mrsinghania | null | mrsinghania/asr-question-detection | 2,136 | 2 | transformers | 1,312 | <i>Question vs Statement classifier</i> trained on more than 7k samples that come from spoken data in an interview setting.
<b>Code for using in Transformers:</b>
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mrsinghania/asr-question-detection")
model = AutoModelForSequenceClassification.from_pretrained("mrsinghania/asr-question-detection")
``` |
ktrapeznikov/biobert_v1.1_pubmed_squad_v2 | 351a8218e59777dcb0a1b454ead77a0c39014bc5 | 2021-05-19T21:10:03.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ktrapeznikov | null | ktrapeznikov/biobert_v1.1_pubmed_squad_v2 | 2,135 | 1 | transformers | 1,313 | ### Model
**[`monologg/biobert_v1.1_pubmed`](https://huggingface.co/monologg/biobert_v1.1_pubmed)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
This model is cased.
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=monologg/biobert_v1.1_pubmed
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 18 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 550 \
--gradient_accumulation_steps 1 \
--fp16 \
--logging_steps 50 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 75.97068980038743 |
| f1 | 79.37043950121722 |
| total | 11873.0 |
| HasAns_exact | 74.13967611336032 |
| HasAns_f1 | 80.94892513460755 |
| HasAns_total | 5928.0 |
| NoAns_exact | 77.79646761984861 |
| NoAns_f1 | 77.79646761984861 |
| NoAns_total | 5945.0 |
| best_exact | 75.97068980038743 |
| best_exact_thresh | 0.0 |
| best_f1 | 79.37043950121729 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
|
aware-ai/wav2vec2-xls-r-1b-5gram-german | 4bfed40b06b3286744027db0cf211efdfb1c7aa6 | 2022-06-01T13:33:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | aware-ai | null | aware-ai/wav2vec2-xls-r-1b-5gram-german | 2,127 | 1 | transformers | 1,314 | ---
language: de
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-1b-5gram-german with LM by Florian Zimmermeister @A\\Ware
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 4.382541642219636
- name: Test CER
type: cer
value: 1.6235493024026488
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8 de
type: mozilla-foundation/common_voice_8_0
args: de
metrics:
- name: Test WER
type: wer
value: 4.382541642219636
- name: Test CER
type: cer
value: 1.6235493024026488
---
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torch
from transformers import AutoModelForCTC, AutoProcessor
from unidecode import unidecode
import re
from datasets import load_dataset, load_metric
import datasets
counter = 0
wer_counter = 0
cer_counter = 0
device = "cuda" if torch.cuda.is_available() else "cpu"
special_chars = [["Ä"," AE "], ["Ö"," OE "], ["Ü"," UE "], ["ä"," ae "], ["ö"," oe "], ["ü"," ue "]]
def clean_text(sentence):
    for special in special_chars:
        sentence = sentence.replace(special[0], special[1])
    sentence = unidecode(sentence)
    for special in special_chars:
        sentence = sentence.replace(special[1], special[0])
    sentence = re.sub("[^a-zA-Z0-9öäüÖÄÜ ,.!?]", " ", sentence)
    return sentence
def main(model_id):
    print("load model")
    model = AutoModelForCTC.from_pretrained(model_id).to(device)
    print("load processor")
    processor = AutoProcessor.from_pretrained(model_id)
    print("load metrics")
    wer = load_metric("wer")
    cer = load_metric("cer")
    ds = load_dataset("mozilla-foundation/common_voice_8_0","de")
    ds = ds["test"]
    ds = ds.cast_column(
        "audio", datasets.features.Audio(sampling_rate=16_000)
    )

    def calculate_metrics(batch):
        global counter, wer_counter, cer_counter
        resampled_audio = batch["audio"]["array"]
        input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values
        with torch.no_grad():
            logits = model(input_values.to(device)).logits.cpu().numpy()[0]
        decoded = processor.decode(logits)
        pred = decoded.text.lower()
        ref = clean_text(batch["sentence"]).lower()
        wer_result = wer.compute(predictions=[pred], references=[ref])
        cer_result = cer.compute(predictions=[pred], references=[ref])
        counter += 1
        wer_counter += wer_result
        cer_counter += cer_result
        if counter % 100 == 0:  # report running metrics every 100 samples
            print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")
        return batch

    ds.map(calculate_metrics, remove_columns=ds.column_names)
    print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")
model_id = "flozi00/wav2vec2-xls-r-1b-5gram-german"
main(model_id)
``` |
readerbench/RoBERT-large | 2677d2cc3bc009380161e71eda03515abfb5feb4 | 2021-05-20T04:07:47.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"ro",
"transformers"
] | null | false | readerbench | null | readerbench/RoBERT-large | 2,125 | null | transformers | 1,315 | Model card for RoBERT-large
---
language:
- ro
---
# RoBERT-large
## Pretrained BERT model for Romanian
Pretrained model on the Romanian language using masked language modeling (MLM) and next sentence prediction (NSP) objectives.
It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: RoBERT-small, RoBERT-base and **RoBERT-large**, all versions uncased.
| Model | Weights | L | H | A | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| RoBERT-small | 19M | 12 | 256 | 8 | 0.5363 | 0.9687 |
| RoBERT-base | 114M | 12 | 768 | 12 | 0.6511 | 0.9802 |
| *RoBERT-large* | *341M* | *24* | *1024* | *24* | *0.6929* | *0.9843* |
All models are available:
* [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small)
* [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base)
* [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoModel, AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-large")
model = TFAutoModel.from_pretrained("readerbench/RoBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoModel, AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-large")
model = AutoModel.from_pretrained("readerbench/RoBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
## Training data
The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process.
| Corpus | Words | Sentences | Size (GB)|
|-----------|:---------:|:---------:|:--------:|
| Oscar | 1.78B | 87M | 10.8 |
| RoTex | 240M | 14M | 1.5 |
| RoWiki | 50M | 2M | 0.3 |
| **Total** | **2.07B** | **103M** | **12.6** |
## Downstream performance
### Sentiment analysis
We report Macro-averaged F1 score (in %)
| Model | Dev | Test |
|------------------|:--------:|:--------:|
| multilingual-BERT| 68.96 | 69.57 |
| XLM-R-base | 71.26 | 71.71 |
| BERT-base-ro | 70.49 | 71.02 |
| RoBERT-small | 66.32 | 66.37 |
| RoBERT-base | 70.89 | 71.61 |
| *RoBERT-large* | **72.48**| **72.11**|
### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification
We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %).
| Model | Dialect Classification | MD to RO | RO to MD |
|-------------------|:----------------------:|:--------:|:--------:|
| 2-CNN + SVM | 93.40 | 65.09 | 75.21 |
| Char+Word SVM | 96.20 | 69.08 | 81.93 |
| BiGRU | 93.30 | **70.10**| 80.30 |
| multilingual-BERT | 95.34 | 68.76 | 78.24 |
| XLM-R-base | 96.28 | 69.93 | 82.28 |
| BERT-base-ro | 96.20 | 69.93 | 78.79 |
| RoBERT-small | 95.67 | 69.01 | 80.40 |
| RoBERT-base | 97.39 | 68.30 | 81.09 |
| *RoBERT-large* | **97.78** | *69.91* | **83.65**|
### Diacritics Restoration
Challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %.
| Model | word level | char level |
|-----------------------------|:----------:|:----------:|
| BiLSTM | 99.42 | - |
| CharCNN | 98.40 | 99.65 |
| CharCNN + multilingual-BERT | 99.72 | 99.94 |
| CharCNN + XLM-R-base | 99.76 | **99.95** |
| CharCNN + BERT-base-ro | **99.79** | **99.95** |
| CharCNN + RoBERT-small | 99.73 | 99.94 |
| CharCNN + RoBERT-base | 99.78 | **99.95** |
| *CharCNN + RoBERT-large* | *99.76* | **99.95** |
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2020robert,
title={RoBERT--A Romanian BERT Model},
author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6626--6637},
year={2020}
}
```
|
IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment | 21ff55d0bd2f7904d2a5380165ac2fd6d0d74b81 | 2022-05-27T07:59:44.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers",
"NLU",
"Sentiment",
"Chinese",
"license:apache-2.0"
] | text-classification | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment | 2,124 | 4 | transformers | 1,316 | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
- Chinese
inference: true
widget:
- text: "今天心情不好"
---
# Erlangshen-Roberta-110M-Sentiment, a Chinese model, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collected 8 sentiment datasets in the Chinese domain for fine-tuning, with a total of 227,347 samples. Our model is mainly based on [roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext).
## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment')
model=BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment')
text='今天心情不好'
output=model(torch.tensor([tokenizer.encode(text)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
## Scores on downstream chinese tasks
| Model | ASAP-SENT | ASAP-ASPECT | ChnSentiCorp |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-Sentiment | 97.77 | 97.31 | 96.61 |
| Erlangshen-Roberta-330M-Sentiment | 97.9 | 97.51 | 96.66 |
| Erlangshen-MegatronBert-1.3B-Sentiment | 98.1 | 97.8 | 97 |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
facebook/contriever-msmarco | abe8c1493371369031bcb1e02acb754cf4e162fa | 2022-06-25T17:19:59.000Z | [
"pytorch",
"bert",
"arxiv:2112.09118",
"transformers",
"feature-extraction"
] | feature-extraction | false | facebook | null | facebook/contriever-msmarco | 2,121 | null | transformers | 1,317 | ---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
This model is the fine-tuned version of the pre-trained contriever model available at https://huggingface.co/facebook/contriever, following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available at https://github.com/facebookresearch/contriever.
## Usage (HuggingFace Transformers)
Using the model directly available in HuggingFace Transformers requires adding a mean pooling operation to obtain a sentence embedding.
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('facebook/contriever-msmarco')
model = AutoModel.from_pretrained('facebook/contriever-msmarco')
sentences = [
"Where was Marie Curie born?",
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
outputs = model(**inputs)
# Mean pooling
def mean_pooling(token_embeddings, mask):
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.)
    sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]
    return sentence_embeddings
embeddings = mean_pooling(outputs[0], inputs['attention_mask'])
``` |
textattack/distilbert-base-cased-CoLA | 73fd8dc841293aab1caea98581bb57481c87ff55 | 2020-06-09T16:45:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-cased-CoLA | 2,120 | null | transformers | 1,318 | Entry not found |
SEBIS/legal_t5_small_trans_fr_en | 2940039b5f8da8f9a6f3c09be0c9667be9d7a9a9 | 2021-06-23T09:52:57.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"French English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation French English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_fr_en | 2,119 | null | transformers | 1,319 |
---
language: French English
tags:
- translation French English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "quels montants ont été attribués et quelles sommes ont été effectivement utilisées dans chaque État membre? 4."
---
# legal_t5_small_trans_fr_en model
A model for translating legal text from French to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_fr_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from French to English.
### How to use
Here is how to use this model to translate legal text from French to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_fr_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_fr_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
fr_text = "quels montants ont été attribués et quelles sommes ont été effectivement utilisées dans chaque État membre? 4."
pipeline([fr_text], max_length=512)
```
## Training data
The legal_t5_small_trans_fr_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_fr_en | 51.44|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
dbmdz/convbert-base-turkish-cased | 6d9b09e4e6f249c477aac7b73f3bcf9aa78ed1a8 | 2021-03-15T23:29:04.000Z | [
"pytorch",
"tf",
"convbert",
"feature-extraction",
"tr",
"arxiv:2008.02496",
"transformers",
"license:mit"
] | feature-extraction | false | dbmdz | null | dbmdz/convbert-base-turkish-cased | 2,119 | null | transformers | 1,320 | ---
language: tr
license: mit
---
# 🤗 + 📚 dbmdz Turkish ConvBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ConvBERT model for Turkish 🎉
# 🇹🇷 ConvBERTurk
ConvBERTurk is a community-driven cased ConvBERT model for Turkish.
In addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented
in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
We follow a different training procedure: instead of using a two-phase approach, that pre-trains the model for 90% with 128
sequence length and 10% with 512 sequence length, we pre-train the model with 512 sequence length for 1M steps on a v3-32 TPU.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-32!
## Usage
With Transformers >= 4.3 our cased ConvBERT model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/convbert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
## Results
For results on PoS tagging, NER and Question Answering downstream tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our DBMDZ BERT models in general, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
valurank/distilroberta-bias | c1e4a2773522c3acc929a7b2c9af2b7e4137b96d | 2022-06-08T20:44:39.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:valurank/wikirev-bias",
"transformers",
"license:other"
] | text-classification | false | valurank | null | valurank/distilroberta-bias | 2,114 | null | transformers | 1,321 | ---
license: other
language: en
datasets:
- valurank/wikirev-bias
---
# DistilROBERTA fine-tuned for bias detection
This model is based on [distilroberta-base](https://huggingface.co/distilroberta-base) pretrained weights, with a classification head fine-tuned to classify text into 2 categories (neutral, biased).
## Training data
The dataset used to fine-tune the model is [wikirev-bias](https://huggingface.co/datasets/valurank/wikirev-bias), extracted from English wikipedia revisions, see https://github.com/rpryzant/neutralizing-bias for details on the WNC wiki edits corpus.
## Inputs
Similar to its base model, this model accepts inputs with a maximum length of 512 tokens.
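## Usage
A minimal usage sketch (not part of the original card) using the 🤗 `pipeline` API; the label names printed come from the checkpoint's config rather than being hard-coded here:
```python
from transformers import pipeline

# the two categories (neutral vs. biased) are defined in the model's config
classifier = pipeline("text-classification", model="valurank/distilroberta-bias")
print(classifier("The senator's reckless plan would obviously ruin the economy."))
```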
|
fhswf/bert_de_ner | 97b17ba2e2bfe2e9d1b8d6e348cb60e0e82fc0b4 | 2021-05-19T16:49:54.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"de",
"dataset:germeval_14",
"transformers",
"German",
"NER",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | fhswf | null | fhswf/bert_de_ner | 2,113 | 2 | transformers | 1,322 | ---
language: de
license: cc-by-sa-4.0
datasets:
- germeval_14
tags:
- German
- de
- NER
---
# BERT-DE-NER
## What is it?
This is a German BERT model fine-tuned for named entity recognition.
## Base model & training
This model is based on [bert-base-german-dbmdz-cased](https://huggingface.co/bert-base-german-dbmdz-cased) and has been fine-tuned
for NER on the training data from [GermEval2014](https://sites.google.com/site/germeval2014ner).
## Model results
The results on the test data from GermEval2014 are (entities only):
| Precision | Recall | F1-Score |
|----------:|-------:|---------:|
| 0.817 | 0.842 | 0.829 |
## How to use
```Python
>>> from transformers import pipeline
>>> classifier = pipeline('ner', model="fhswf/bert_de_ner")
>>> classifier('Von der Organisation „medico international“ hieß es, die EU entziehe sich seit vielen Jahren der Verantwortung für die Menschen an ihren Außengrenzen.')
[{'word': 'med', 'score': 0.9996621608734131, 'entity': 'B-ORG', 'index': 6},
{'word': '##ico', 'score': 0.9995362162590027, 'entity': 'I-ORG', 'index': 7},
{'word': 'international',
'score': 0.9996932744979858,
'entity': 'I-ORG',
'index': 8},
{'word': 'eu', 'score': 0.9997008442878723, 'entity': 'B-ORG', 'index': 14}]
```
|
ixa-ehu/SciBERT-SQuAD-QuAC | df352e10c506e443875447c166a679b6a5ee34e9 | 2021-06-29T22:55:53.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"arxiv:1808.07036",
"transformers",
"autotrain_compatible"
] | question-answering | false | ixa-ehu | null | ixa-ehu/SciBERT-SQuAD-QuAC | 2,110 | null | transformers | 1,323 | ---
language: en
---
# SciBERT-SQuAD-QuAC
This is the [SciBERT language representation model](https://huggingface.co/allenai/scibert_scivocab_uncased) fine-tuned for Question Answering. SciBERT is a pre-trained language model based on BERT that has been trained on a large corpus of scientific text. For fine-tuning on Question Answering we combined the [SQuAD2.0](https://www.aclweb.org/anthology/P18-2124/) and [QuAC](https://arxiv.org/abs/1808.07036) datasets.
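A minimal extractive QA sketch with the 🤗 `pipeline` API (the question/context pair below is only illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ixa-ehu/SciBERT-SQuAD-QuAC")
answer = qa(
    question="What kind of text was SciBERT trained on?",
    context="SciBERT is a pre-trained language model based on BERT that has been "
            "trained on a large corpus of scientific text.",
)
print(answer)
```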
If using this model, please cite the following paper:
```
@inproceedings{otegi-etal-2020-automatic,
title = "Automatic Evaluation vs. User Preference in Neural Textual {Q}uestion{A}nswering over {COVID}-19 Scientific Literature",
author = "Otegi, Arantxa and
Campos, Jon Ander and
Azkune, Gorka and
Soroa, Aitor and
Agirre, Eneko",
booktitle = "Proceedings of the 1st Workshop on {NLP} for {COVID}-19 (Part 2) at {EMNLP} 2020",
month = dec,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.nlpcovid19-2.15",
doi = "10.18653/v1/2020.nlpcovid19-2.15",
}
```
|
hf-internal-testing/tiny-random-blenderbot | 9432cd260adf10352afc43e7080b154ca0313105 | 2021-09-17T19:25:13.000Z | [
"pytorch",
"tf",
"blenderbot",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-blenderbot | 2,106 | null | transformers | 1,324 | Entry not found |
JamesStratford/PLord-bot-DialoGPT-medium | a1f8100aa348ae0b41363d6089d81529e0ac3484 | 2022-07-08T01:37:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | JamesStratford | null | JamesStratford/PLord-bot-DialoGPT-medium | 2,105 | null | transformers | 1,325 | ---
tags:
- conversational
---
# PlordBot - medium |
ktrapeznikov/gpt2-medium-topic-news | d079f5fb6ab7eaf5a38dc2a72bd708a60879d23c | 2021-05-23T06:18:56.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | ktrapeznikov | null | ktrapeznikov/gpt2-medium-topic-news | 2,101 | 1 | transformers | 1,326 | ---
language:
- en
thumbnail:
widget:
- text: "topic: climate article:"
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a large news corpus, conditioned on a topic
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, prompt model with:
`topic: climate article:`
The following tags were used during training:
`arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian`
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic: {topic} article:", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(list(out.cpu()[0])))
```
## Training data
## Training procedure
|
Qishuai/distilbert_punctuator_en | 3b5050a2775440ef59b76082ad75eb9574973ad3 | 2021-12-13T14:47:49.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Qishuai | null | Qishuai/distilbert_punctuator_en | 2,098 | 5 | transformers | 1,327 | # Punctuator for Uncased English
The model is a `DistilBertForTokenClassification` model fine-tuned for adding punctuation to plain text (uncased English).
## Usage
```python
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast
model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_en")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_en")
```
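As a rough inference sketch (not from the original card), the raw token-level predictions can be inspected as follows; the mapping from label ids to punctuation marks is read from `model.config.id2label` rather than assumed:
```python
import torch

text = "hello how are you today i hope everything is fine"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id])
```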
## Model Overview
### Training data
A combination of the following three datasets:
- BBC news: From BBC news website corresponding to stories in five topical areas from 2004-2005. [Reference](https://www.kaggle.com/hgultekin/bbcnewsarchive)
- News articles: 20000 samples of short news articles scraped from Hindu, Indian times and Guardian between Feb 2017 and Aug 2017 [Reference](https://www.kaggle.com/sunnysai12345/news-summary?select=news_summary_more.csv)
- Ted talks: transcripts of over 4,000 TED talks between 2004 and 2019 [Reference](https://www.kaggle.com/miguelcorraljr/ted-ultimate-dataset)
### Model Performance
- Validation with 500 samples of a dataset scraped from the https://www.thenews.com.pk website. [Reference](https://www.kaggle.com/asad1m9a9h6mood/news-articles)
- Metrics Report:
| | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| COMMA | 0.66 | 0.55 | 0.60 | 7064 |
| EXLAMATIONMARK | 1.00 | 0.00 | 0.00 | 5 |
| PERIOD | 0.73 | 0.63 | 0.68 | 6573 |
| QUESTIONMARK | 0.54 | 0.41 | 0.47 | 17 |
| micro avg | 0.69 | 0.59 | 0.64 | 13659 |
| macro avg | 0.73 | 0.40 | 0.44 | 13659 |
| weighted avg | 0.69 | 0.59 | 0.64 | 13659 |
- Validation with 86 new TED talks from 2020 which are not included in the training dataset [Reference](https://www.kaggle.com/thegupta/ted-talk)
- Metrics Report:
| | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| COMMA | 0.71 | 0.56 | 0.63 | 10712 |
| EXLAMATIONMARK | 0.45 | 0.07 | 0.12 | 75 |
| PERIOD | 0.75 | 0.65 | 0.70 | 7921 |
| QUESTIONMARK | 0.73 | 0.67 | 0.70 | 827 |
| micro avg | 0.73 | 0.60 | 0.66 | 19535 |
| macro avg | 0.66 | 0.49 | 0.53 | 19535 |
| weighted avg | 0.73 | 0.60 | 0.66 | 19535 |
|
johngiorgi/declutr-base | 3a644f1c78aae97f6e7ed0e2463bcbbaef2e7383 | 2022-03-11T14:47:09.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"arxiv:2006.03659",
"transformers",
"autotrain_compatible"
] | fill-mask | false | johngiorgi | null | johngiorgi/declutr-base | 2,095 | 1 | transformers | 1,328 | # DeCLUTR-base
## Model description
The "DeCLUTR-base" model from our paper: [DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659).
## Intended uses & limitations
The model is intended to be used as a universal sentence encoder, similar to [Google's Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers).
#### How to use
Please see [our repo](https://github.com/JohnGiorgi/DeCLUTR) for full details. A simple example is shown below.
##### With [SentenceTransformers](https://www.sbert.net/)
```python
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("johngiorgi/declutr-base")
# Prepare some text to embed
texts = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
# Embed the text
embeddings = model.encode(texts)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
##### With 🤗 Transformers
```python
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer
# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-base")
model = AutoModel.from_pretrained("johngiorgi/declutr-base")
# Prepare some text to embed
text = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
# Embed the text
with torch.no_grad():
sequence_output = model(**inputs)[0]
# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdim=True), min=1e-9)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
### BibTeX entry and citation info
```bibtex
@article{Giorgi2020DeCLUTRDC,
title={DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations},
author={John M Giorgi and Osvald Nitski and Gary D. Bader and Bo Wang},
journal={ArXiv},
year={2020},
volume={abs/2006.03659}
}
``` |
LeBenchmark/wav2vec2-FR-7K-large | 970d57910b508c27e9cafd52b781fee76cebfc8b | 2021-11-23T17:54:37.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | LeBenchmark | null | LeBenchmark/wav2vec2-FR-7K-large | 2,092 | 3 | transformers | 1,329 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark](https://openreview.net/pdf?id=TSvj5dmuSd)
## Model and data descriptions
We release several models that can be found under our HuggingFace organization: two different wav2vec2 architectures, *Base* and *Large*, coupled with our small (1K), medium (3K), and large (7K) corpora. A larger one should come later. In short:
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
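## Feature extraction with HuggingFace Transformers
A minimal feature-extraction sketch, assuming the checkpoint ships a standard Wav2Vec2 feature-extractor config and that the input is a 16 kHz mono waveform:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_name = "LeBenchmark/wav2vec2-FR-7K-large"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

# dummy one-second mono waveform sampled at 16 kHz (replace with real French speech)
waveform = np.zeros(16000, dtype=np.float32)

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```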
## Referencing LeBenchmark
```
@article{Evain2021LeBenchmarkAR,
title={LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech},
author={Sol{\`e}ne Evain and Ha Nguyen and Hang Le and Marcely Zanon Boito and Salima Mdhaffar and Sina Alisamir and Ziyi Tong and N. Tomashenko and Marco Dinarelli and Titouan Parcollet and A. Allauzen and Y. Est{\`e}ve and B. Lecouteux and F. Portet and S. Rossato and F. Ringeval and D. Schwab and L. Besacier},
journal={ArXiv},
year={2021},
volume={abs/2104.11462}
}
```
|
hustvl/yolos-small | 5f960fd774250e41a01086ccbbf5e44d9d603c14 | 2022-06-27T08:37:45.000Z | [
"pytorch",
"yolos",
"object-detection",
"dataset:coco",
"arxiv:2106.00666",
"transformers",
"vision",
"license:apache-2.0"
] | object-detection | false | hustvl | null | hustvl/yolos-small | 2,089 | 10 | transformers | 1,330 | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# YOLOS (small-sized) model
YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
The model was pre-trained for 200 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO.
## Evaluation results
This model achieves an AP (average precision) of **36.1** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
hyunwoongko/kobart | b5a881942b2536ed7851752a77d7da36d58f2e49 | 2022-04-11T01:19:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ko",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | hyunwoongko | null | hyunwoongko/kobart | 2,083 | 1 | transformers | 1,331 | ---
language: ko
tags:
- bart
license: mit
---
## KoBART-base-v2
With the addition of chat data, the model is trained to better handle the semantics of longer sequences than the original KoBART.
```python
from transformers import PreTrainedTokenizerFast, BartModel
tokenizer = PreTrainedTokenizerFast.from_pretrained('hyunwoongko/kobart')
model = BartModel.from_pretrained('hyunwoongko/kobart')
```
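A short follow-up sketch (not in the original card) running a forward pass; the Korean sentence is only illustrative:
```python
inputs = tokenizer("안녕하세요.", return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```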
### Performance
NSMC
- acc. : 0.901
### hyunwoongko/kobart
- Added bos/eos post processor
- Removed token_type_ids
|
richielleisart/Childe | 5da9de8d7f9e9cfde2c126b6ac6531b3ddff606a | 2022-01-19T18:52:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | richielleisart | null | richielleisart/Childe | 2,080 | null | transformers | 1,332 | ---
tags:
- conversational
---
# Childe Chatbot Model |
nvidia/mit-b5 | 9707ed6bec8a37b67fc9b6d03fe6fbb0e8020f76 | 2022-07-29T13:15:56.000Z | [
"pytorch",
"tf",
"segformer",
"image-classification",
"dataset:imagenet_1k",
"arxiv:2105.15203",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | nvidia | null | nvidia/mit-b5 | 2,076 | null | transformers | 1,333 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet_1k
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# SegFormer (b5-sized) encoder pre-trained-only
SegFormer encoder fine-tuned on Imagenet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes.
## Intended uses & limitations
You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b5")
model = SegformerForImageClassification.from_pretrained("nvidia/mit-b5")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
microsoft/deberta-v2-xlarge-mnli | 5272422ce68b8d61766079390b96b033a64414d2 | 2021-05-21T20:08:15.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta",
"deberta-mnli",
"license:mit"
] | text-classification | false | microsoft | null | microsoft/deberta-v2-xlarge-mnli | 2,075 | 2 | transformers | 1,334 | ---
language: en
tags:
- deberta
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XLarge model fine-tuned on the MNLI task, with 24 layers and a hidden size of 1,536. It has a total of 900M parameters.
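As a quick sanity check, here is a minimal NLI inference sketch (not part of the original card; label names are read from the model config rather than hard-coded):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/deberta-v2-xlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "I love you."
hypothesis = "I like you."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

for label_id, label in model.config.id2label.items():
    print(label, round(probs[label_id].item(), 3))
```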
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
hf-internal-testing/tiny-random-longformer | 5690941b3c077e091b13b5f992b42e2ead18b35d | 2021-09-17T19:24:34.000Z | [
"pytorch",
"tf",
"longformer",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-longformer | 2,070 | 1 | transformers | 1,335 | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | 6ef16021f303c8a2bac02fd5af16601593e665d2 | 2021-10-17T12:09:14.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | 2,064 | 2 | transformers | 1,336 | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT Mix SA Model
## Model description
**CAMeLBERT Mix SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT Mix SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A | dce2aa07b3e1e5c91c2f411c5534b399462f7b16 | 2021-07-25T21:33:06.000Z | [
"pytorch",
"mpnet",
"fill-mask",
"arxiv:2102.07033",
"arxiv:2104.08727",
"sentence-transformers",
"feature-extraction",
"sentence-similarity"
] | sentence-similarity | false | flax-sentence-embeddings | null | flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A | 2,064 | 1 | sentence-transformers | 1,337 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-QA_v1-mpnet-asymmetric-A
## Model Description
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for Clustering, Semantic Search and other tasks. We used two separate pretrained [mpnet-base](https://huggingface.co/microsoft/mpnet-base) models and trained them using a contrastive learning objective. Question and answer pairs from StackExchange and other datasets were used as training data to make the model well suited for Question/Answer embedding similarity.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
This model set is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence's semantic information. The sentence vector may be used for semantic search, clustering or sentence similarity tasks.
The two models should be used in conjunction for Semantic Search purposes.
1. [multi-QA_v1-mpnet-asymmetric-Q](https://huggingface.co/flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q) - Model to encode Questions
1. [multi-QA_v1-mpnet-asymmetric-A](https://huggingface.co/flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A) - Model to encode Answers
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer, util

model_Q = SentenceTransformer('flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-Q')
model_A = SentenceTransformer('flax-sentence-embeddings/multi-QA_v1-mpnet-asymmetric-A')

question = "Replace me by any question you'd like."
question_embedding = model_Q.encode(question)

answer = "Replace me by any answer you'd like."
answer_embedding = model_A.encode(answer)

# cosine similarity between the question and answer embeddings
answer_likeliness = util.cos_sim(question_embedding, answer_embedding)
```
# Training procedure
## Pre-training
We use the pretrained [`Mpnet-base`](https://huggingface.co/microsoft/mpnet-base). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
### Hyper parameters
We trained the model on a TPU v3-8 for 80k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We used the concatenation from multiple Stackexchange Question-Answer datasets to fine-tune our model. MSMARCO, NQ & other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| SearchQA | - | 582,261 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | |
transfaeries/DialoGPT-medium-Discord-1.0 | fdd1f5fa445bd30233a0a2d854d89741fad3fa80 | 2021-09-02T04:19:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | transfaeries | null | transfaeries/DialoGPT-medium-Discord-1.0 | 2,062 | null | transformers | 1,338 | ---
tags:
- conversational
---
# Discord Model Medium 7 epochs |
hf-internal-testing/tiny-random-t5-v1.1 | 95197e7dc6c034b9ae97b124952afb5e15ed0fb2 | 2021-11-02T21:08:45.000Z | [
"pytorch",
"tf",
"t5",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-t5-v1.1 | 2,060 | null | transformers | 1,339 | Entry not found |
M-CLIP/M-BERT-Distil-40 | ff20c09c1a088589cb65a169d165b5ddcbe792ca | 2021-03-21T15:39:15.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | M-CLIP | null | M-CLIP/M-BERT-Distil-40 | 2,056 | 1 | transformers | 1,340 | <br />
<p align="center">
<h1 align="center">M-BERT Distil 40</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('M-BERT-Distil-40')
embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [distilbert-base-multilingual](https://huggingface.co/distilbert-base-multilingual-cased) tuned to match the embedding space for [40 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Distil%2040/Fine-Tune-Languages.md), to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 40 languages used during fine-tuning can be found in [SupportedLanguages.md](Fine-Tune-Languages.md).
Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language.
All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 40 languages.
## Evaluation
[These results can be viewed at Github](https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040). <br>
A non-rigorous qualitative evaluation shows that for the languages French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception being that Greeks are apparently unable to recognize happy persons. <br>
When testing on Kannada, a language which was included during pre-training but not fine-tuning, it performed close to random
|
voidful/albert_chinese_base | 549e8a023d81bd68e70cf3e2b4aa621e145695ed | 2021-08-03T05:02:21.000Z | [
"pytorch",
"albert",
"fill-mask",
"zh",
"transformers",
"autotrain_compatible"
] | fill-mask | false | voidful | null | voidful/albert_chinese_base | 2,054 | 4 | transformers | 1,341 | ---
language: zh
pipeline_tag: fill-mask
widget:
- text: "今天[MASK]情很好"
---
# albert_chinese_base
This is an albert_chinese_base model from [Google's github](https://github.com/google-research/ALBERT),
converted with huggingface's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).
## Notice
*Support AutoTokenizer*
Since sentencepiece is not used in the albert_chinese_base model,
you have to call BertTokenizer instead of AlbertTokenizer; AlbertTokenizer would fail to load the vocabulary.
We can run a MaskedLM prediction to verify that this approach is correct, as shown below.
## Justify (validity check)
```python
from transformers import AutoTokenizer, AlbertForMaskedLM
import torch
from torch.nn.functional import softmax
pretrained = 'voidful/albert_chinese_base'
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)
inputtext = "今天[MASK]情很好"
maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)
input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=input_ids)
loss, prediction_scores = outputs[:2]
logit_prob = softmax(prediction_scores[0, maskpos],dim=-1).data.tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `感 0.36333346366882324`
|
M-CLIP/XLM-Roberta-Large-Vit-B-32 | cfb9f55a6aad08a948167a8360fc11bce171d941 | 2022-06-02T23:23:21.000Z | [
"pytorch",
"tf",
"M-CLIP",
"multilingual",
"transformers"
] | null | false | M-CLIP | null | M-CLIP/XLM-Roberta-Large-Vit-B-32 | 2,052 | null | transformers | 1,342 | ---
language: multilingual
---
## Multilingual-clip: XLM-Roberta-Large-Vit-B-32
Multilingual-CLIP extends OpenAI's English text encoders to multiple other languages. This model *only* contains the multilingual text encoder. The corresponding image model `ViT-B-32` can be retrieved via instructions found on OpenAI's [CLIP repository on Github](https://github.com/openai/CLIP). We provide a usage example below.
## Requirements
To use both the multilingual text encoder and corresponding image encoder, we need to install the packages [`multilingual-clip`](https://github.com/FreddeFrallan/Multilingual-CLIP) and [`clip`](https://github.com/openai/CLIP).
```
pip install multilingual-clip
pip install git+https://github.com/openai/CLIP.git
```
## Usage
Extracting embeddings from the text encoder can be done in the following way:
```python
from multilingual_clip import pt_multilingual_clip
import transformers
texts = [
'Three blind horses listening to Mozart.',
'Älgen är skogens konung!',
'Wie leben Eisbären in der Antarktis?',
'Вы знали, что все белые медведи левши?'
]
model_name = 'M-CLIP/XLM-Roberta-Large-Vit-B-32'
# Load Model & Tokenizer
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
embeddings = model.forward(texts, tokenizer)
print("Text features shape:", embeddings.shape)
```
Extracting embeddings from the corresponding image encoder:
```python
import torch
import clip
import requests
from PIL import Image
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image = preprocess(image).unsqueeze(0).to(device)
with torch.no_grad():
image_features = model.encode_image(image)
print("Image features shape:", image_features.shape)
```
## Evaluation results
None of the M-CLIP models have been extensively evaluated, but testing them on Txt2Img retrieval on the human-translated MS-COCO dataset, we see the following **R@10** results:
| Name | En | De | Es | Fr | Zh | It | Pl | Ko | Ru | Tr | Jp |
| ----------------------------------|:-----: |:-----: |:-----: |:-----: | :-----: |:-----: |:-----: |:-----: |:-----: |:-----: |:-----: |
| [OpenAI CLIP Vit-B/32](https://github.com/openai/CLIP)| 90.3 | - | - | - | - | - | - | - | - | - | - |
| [OpenAI CLIP Vit-L/14](https://github.com/openai/CLIP)| 91.8 | - | - | - | - | - | - | - | - | - | - |
| [OpenCLIP ViT-B-16+-](https://github.com/openai/CLIP)| 94.3 | - | - | - | - | - | - | - | - | - | - |
| [LABSE Vit-L/14](https://huggingface.co/M-CLIP/LABSE-Vit-L-14)| 91.6 | 89.6 | 89.5 | 89.9 | 88.9 | 90.1 | 89.8 | 80.8 | 85.5 | 89.8 | 73.9 |
| [XLM-R Large Vit-B/32](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-32)| 91.8 | 88.7 | 89.1 | 89.4 | 89.3 | 89.8| 91.4 | 82.1 | 86.1 | 88.8 | 81.0 |
| [XLM-R Vit-L/14](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14)| 92.4 | 90.6 | 91.0 | 90.0 | 89.7 | 91.1 | 91.3 | 85.2 | 85.8 | 90.3 | 81.9 |
| [XLM-R Large Vit-B/16+](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-16Plus)| **95.0** | **93.0** | **93.6** | **93.1** | **94.0** | **93.1** | **94.4** | **89.0** | **90.0** | **93.0** | **84.2** |
## Training/Model details
Further details about the model training and data can be found in the [model card](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/larger_mclip.md). |
textattack/roberta-base-MNLI | 6f2e633322381bc5897405e417ec531ea3633a3f | 2021-05-20T22:06:43.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/roberta-base-MNLI | 2,037 | 1 | transformers | 1,343 | Entry not found |
philschmid/MiniLM-L6-H384-uncased-sst2 | 0c0ecdc39368f87291727ec084111e89e30b45b2 | 2021-09-24T09:53:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | philschmid | null | philschmid/MiniLM-L6-H384-uncased-sst2 | 2,034 | null | transformers | 1,344 | Entry not found |
lidiya/bart-base-samsum | eeb19117db15f1388c7188cb455e7a98af647792 | 2022-07-20T14:56:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:samsum",
"transformers",
"seq2seq",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | lidiya | null | lidiya/bart-base-samsum | 2,030 | 1 | transformers | 1,345 | ---
language: en
tags:
- bart
- seq2seq
- summarization
license: apache-2.0
datasets:
- samsum
widget:
- text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\
Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\
\ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\
\ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face\n"
model-index:
- name: bart-base-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization'
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 46.6619
- name: Validation ROUGE-2
type: rouge-2
value: 23.3285
- name: Validation ROUGE-L
type: rouge-l
value: 39.4811
- name: Test ROUGE-1
type: rouge-1
value: 44.9932
- name: Test ROUGE-2
type: rouge-2
value: 21.7286
- name: Test ROUGE-L
type: rouge-l
value: 38.1921
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 45.0148
verified: true
- name: ROUGE-2
type: rouge
value: 21.6861
verified: true
- name: ROUGE-L
type: rouge
value: 38.1728
verified: true
- name: ROUGE-LSUM
type: rouge
value: 41.2794
verified: true
- name: loss
type: loss
value: 1.597476601600647
verified: true
- name: gen_len
type: gen_len
value: 17.6606
verified: true
---
## `bart-base-samsum`
This model was obtained by fine-tuning `facebook/bart-base` on the SAMSum dataset.
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
```
## Training procedure
- Colab notebook: https://colab.research.google.com/drive/1RInRjLLso9E2HG_xjA6j8JO3zXzSCBRF?usp=sharing
## Results
| key | value |
| --- | ----- |
| eval_rouge1 | 46.6619 |
| eval_rouge2 | 23.3285 |
| eval_rougeL | 39.4811 |
| eval_rougeLsum | 43.0482 |
| test_rouge1 | 44.9932 |
| test_rouge2 | 21.7286 |
| test_rougeL | 38.1921 |
| test_rougeLsum | 41.2672 |
|
akdeniz27/roberta-large-cuad | 32cd27aa93ae12e576f214c40c558bdcc5081220 | 2021-11-14T08:43:30.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"transformers",
"autotrain_compatible"
] | question-answering | false | akdeniz27 | null | akdeniz27/roberta-large-cuad | 2,024 | null | transformers | 1,346 | ---
language: en
datasets:
- cuad
---
# RoBERTa Large Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Large"
using the CUAD dataset https://huggingface.co/datasets/cuad
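A minimal usage sketch with the 🤗 `pipeline` API (the question/context pair is only illustrative; see the links below for the full CUAD-style demo):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="akdeniz27/roberta-large-cuad")
result = qa(
    question="Highlight the parts (if any) of this contract related to the governing law.",
    context="This Agreement shall be governed by and construed in accordance with the "
            "laws of the State of New York.",
)
print(result)
```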
Link for model checkpoint: https://github.com/TheAtticusProject/cuad
For the use of the model with CUAD: https://github.com/marshmellow77/cuad-demo
and https://huggingface.co/spaces/akdeniz27/contract-understanding-atticus-dataset-demo |
deepset/tapas-large-nq-hn-reader | 3b9b9fcfd1789686d05a3b63d8492ac162c7d9fc | 2022-01-23T14:58:08.000Z | [
"pytorch",
"tapas",
"en",
"transformers",
"license:apache-2.0"
] | null | false | deepset | null | deepset/tapas-large-nq-hn-reader | 2,024 | null | transformers | 1,347 | ---
language: en
tags:
- tapas
license: apache-2.0
---
This model contains the converted PyTorch checkpoint of the original Tensorflow model available in the [TaPas repository](https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md#reader-models).
It is described in Herzig et al.'s (2021) [paper](https://aclanthology.org/2021.naacl-main.43/) _Open Domain Question Answering over Tables via Dense Retrieval_.
This model has two versions, which differ only in the table scoring head.
The default version has an adapted table scoring head in order to be able to generate probabilities out of the logits.
The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting `revision="original"`.
# Usage
## In Haystack
If you want to use this model for question-answering over tables, you can load it in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import TableReader
table_reader = TableReader(model_name_or_path="deepset/tapas-large-nq-hn-reader")
```
|
facebook/wav2vec2-large-robust | 2493a2c576276145c3e066d9243b0e391fab673a | 2021-11-05T12:45:27.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"en",
"dataset:libri_light",
"dataset:common_voice",
"dataset:switchboard",
"dataset:fisher",
"arxiv:2104.01027",
"transformers",
"speech",
"license:apache-2.0"
] | null | false | facebook | null | facebook/wav2vec2-large-robust | 2,021 | 9 | transformers | 1,348 | ---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Large-Robust
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model pretrained on 16kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-source collected audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
When using the model make sure that your speech input is also sampled at 16Khz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
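Since the checkpoint was pretrained without a tokenizer, a minimal sketch for extracting speech representations is shown below; `speech_array` is an assumed placeholder for a 1-D array of 16kHz mono audio (e.g. loaded with `soundfile` or `librosa`), not something defined in this card.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
# if this repository lacks a preprocessor config, the default wav2vec2 feature
# extractor settings (zero-mean, unit-variance normalization at 16kHz) can be reused
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-robust")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-robust")
# speech_array: 1-D float array of 16kHz mono audio (placeholder)
inputs = feature_extractor(speech_array, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, time, hidden_size)
```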
|
hf-internal-testing/tiny-random-deberta | 449491e17107f61f2e8df35a0e20a55e9c4afd3c | 2021-09-17T19:22:32.000Z | [
"pytorch",
"tf",
"deberta",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-deberta | 2,019 | null | transformers | 1,349 | Entry not found |
Helsinki-NLP/opus-mt-ur-en | c803d32b6f7a3a7a8cb1ba91d2947de0009f8cdc | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ur",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ur-en | 2,017 | 1 | transformers | 1,350 | ---
language:
- ur
- en
tags:
- translation
license: apache-2.0
---
### urd-eng
* source group: Urdu
* target group: English
* OPUS readme: [urd-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md)
* model: transformer-align
* source language(s): urd
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.eval.txt)
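## Usage
A minimal translation sketch with the Hugging Face `transformers` Marian classes; the Urdu example sentence is an illustrative placeholder, not part of the original card.
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-ur-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
src_text = ["یہ ایک مثال ہے۔"]  # placeholder sentence: "This is an example."
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```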
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.urd.eng | 23.2 | 0.435 |
### System Info:
- hf_name: urd-eng
- source_languages: urd
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ur', 'en']
- src_constituents: {'urd'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt
- src_alpha3: urd
- tgt_alpha3: eng
- short_pair: ur-en
- chrF2_score: 0.435
- bleu: 23.2
- brevity_penalty: 0.975
- ref_len: 12029.0
- src_name: Urdu
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ur
- tgt_alpha2: en
- prefer_old: False
- long_pair: urd-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
aychang/roberta-base-imdb | cb6bcadd0540b61c9623bd6295d51ac445ceb135 | 2021-05-20T14:25:56.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:imdb",
"transformers",
"license:mit"
] | text-classification | false | aychang | null | aychang/roberta-base-imdb | 2,017 | 1 | transformers | 1,351 | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- imdb
metrics:
---
# IMDB Sentiment Task: roberta-base
## Model description
A simple RoBERTa base model fine-tuned on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Use pipeline
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal model trained on a single benchmark dataset.
## Training data
IMDB https://huggingface.co/datasets/imdb
## Training procedure
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=800,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.94668,
'eval_f1': array([0.94603457, 0.94731017]),
'eval_loss': 0.2578844428062439,
'eval_precision': array([0.95762642, 0.93624502]),
'eval_recall': array([0.93472, 0.95864]),
'eval_runtime': 244.7522,
'eval_samples_per_second': 102.144}
```
|
stas/pegasus-cnn_dailymail-tiny-random | 600d1e9bb307c4c4c7361688317e80fc2612bc5c | 2021-07-01T05:33:00.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stas | null | stas/pegasus-cnn_dailymail-tiny-random | 2,015 | null | transformers | 1,352 | This is a tiny random pegasus-cnn_dailymail model used for testing.
See `make-pegasus-cnn_dailymail-tiny-random.py` for how it was created.
|
hf-internal-testing/tiny-random-big_bird | 0ab074a1d464a4cc6846332560f1f2abca400a71 | 2022-03-25T17:49:02.000Z | [
"pytorch",
"big_bird",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-big_bird | 2,010 | null | transformers | 1,353 | Entry not found |
stas/tiny-m2m_100 | 4df2a26e27b5f4823e2e797424de47f14c2e1b27 | 2022-04-29T23:57:25.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"transformers",
"testing",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | stas | null | stas/tiny-m2m_100 | 2,009 | null | transformers | 1,354 | ---
language:
- en
thumbnail:
tags:
- testing
license: apache-2.0
---
# Tiny M2M100 model
This is a tiny model that is used in the `transformers` test suite. It doesn't do anything useful beyond functional testing.
Do not try to use it for anything that requires quality.
The model is indeed 4MB in size.
You can see how it was created [here](https://huggingface.co/stas/tiny-m2m_100/blob/main/m2m-make-tiny-model.py)
If you're looking for the real model, please go to [https://huggingface.co/facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M).
|
hf-internal-testing/tiny-random-mpnet | 490e676cf9e1714ddd21f9169dc14652e9a9e7f4 | 2021-09-17T19:25:01.000Z | [
"pytorch",
"tf",
"mpnet",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-mpnet | 2,008 | null | transformers | 1,355 | Entry not found |
hf-internal-testing/tiny-electra | 7479c5defabc4a550d08c170f7f4fb0b0e6be19b | 2021-07-16T01:27:58.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hf-internal-testing | null | hf-internal-testing/tiny-electra | 2,005 | null | transformers | 1,356 | This is a tiny-electra random model to be used for basic testing.
|
hf-internal-testing/tiny-random-funnel | ec246a681806cada4b3c073569afba96f7ac8eb8 | 2021-09-17T19:25:04.000Z | [
"pytorch",
"tf",
"funnel",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-funnel | 2,002 | null | transformers | 1,357 | Entry not found |
Helsinki-NLP/opus-mt-en-sv | 13d9f7f708dd86e1edf61f0cd438298267b83850 | 2021-09-09T21:39:27.000Z | [
"pytorch",
"rust",
"marian",
"text2text-generation",
"en",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sv | 1,998 | 1 | transformers | 1,358 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sv
* source languages: en
* target languages: sv
* OPUS readme: [en-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.eval.txt)
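## Usage
A minimal translation sketch with the Hugging Face `transformers` Marian classes; the English example sentence is an illustrative placeholder.
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-en-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
src_text = ["Machine translation is surprisingly easy to try out."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```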
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.sv | 60.1 | 0.736 |
|
hf-internal-testing/tiny-random-prophetnet | d5071e4655fd0413b0e71405a91dfb4280e31b81 | 2021-09-17T19:24:57.000Z | [
"pytorch",
"prophetnet",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-prophetnet | 1,998 | null | transformers | 1,359 | Entry not found |
hf-internal-testing/tiny-random-mobilebert | 1f919a6d77ef448d41e0de29f79f854ace43bc4c | 2021-09-17T19:24:24.000Z | [
"pytorch",
"tf",
"mobilebert",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-mobilebert | 1,996 | null | transformers | 1,360 | Entry not found |
hf-internal-testing/tiny-random-squeezebert | da3eaaeb3b2fa22836d34097046f192db387e961 | 2021-09-17T19:25:10.000Z | [
"pytorch",
"squeezebert",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-squeezebert | 1,995 | null | transformers | 1,361 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-en-tr | dc016b6b79636a066052e581101c734ca5934667 | 2022-06-01T13:01:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tr",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-tr | 1,995 | 1 | transformers | 1,362 | ---
language:
- en
- tr
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-tr
results:
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: flores101-devtest
type: flores_101
args: eng tur devtest
metrics:
- name: BLEU
type: bleu
value: 31.4
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newsdev2016
type: newsdev2016
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 21.9
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 42.3
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newstest2016
type: wmt-2016-news
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 23.4
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newstest2017
type: wmt-2017-news
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 25.4
- task:
name: Translation eng-tur
type: translation
args: eng-tur
dataset:
name: newstest2018
type: wmt-2018-news
args: eng-tur
metrics:
- name: BLEU
type: bleu
value: 22.6
---
# opus-mt-tc-big-en-tr
Neural machine translation model for translating from English (en) to Turkish (tr).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): tur
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information on released models: [OPUS-MT eng-tur README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"I know Tom didn't want to eat that.",
"On Sundays, we would get up early and go fishing."
]
model_name = "pytorch-models/opus-mt-tc-big-en-tr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Tom'un bunu yemek istemediğini biliyorum.
# Pazar günleri erkenden kalkıp balık tutmaya giderdik.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-tr")
print(pipe("I know Tom didn't want to eat that."))
# expected output: Tom'un bunu yemek istemediğini biliyorum.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-tur | tatoeba-test-v2021-08-07 | 0.68726 | 42.3 | 13907 | 84364 |
| eng-tur | flores101-devtest | 0.62829 | 31.4 | 1012 | 20253 |
| eng-tur | newsdev2016 | 0.58947 | 21.9 | 1001 | 15958 |
| eng-tur | newstest2016 | 0.57624 | 23.4 | 3000 | 50782 |
| eng-tur | newstest2017 | 0.58858 | 25.4 | 3007 | 51977 |
| eng-tur | newstest2018 | 0.57848 | 22.6 | 3000 | 53731 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:11:39 EEST 2022
* port machine: LM0-400-22516.local
|
hf-internal-testing/tiny-layoutlm | 7bc6366344bf3e7363a5e0e2f4fdd3087ab68e4a | 2021-08-04T04:33:04.000Z | [
"pytorch",
"layoutlm",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hf-internal-testing | null | hf-internal-testing/tiny-layoutlm | 1,992 | null | transformers | 1,363 | This is a tiny-layoutlm random model to be used for basic testing.
|
hf-internal-testing/tiny-random-gpt_neo | b95a8110971dfc560caa02c286c3b8aa0118941a | 2021-09-17T19:25:26.000Z | [
"pytorch",
"gpt_neo",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-gpt_neo | 1,992 | null | transformers | 1,364 | Entry not found |
hf-internal-testing/tiny-random-led | 2774e58f25d3fdda4c0d86b140cca8e049ee6a9f | 2021-09-17T19:24:21.000Z | [
"pytorch",
"tf",
"led",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-led | 1,992 | null | transformers | 1,365 | Entry not found |
hf-internal-testing/tiny-random-deberta-v2 | 924b47948998e199d88e95e1df46ab125e0f325a | 2021-09-17T19:23:17.000Z | [
"pytorch",
"tf",
"deberta-v2",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-deberta-v2 | 1,991 | null | transformers | 1,366 | Entry not found |
lincoln/mbart-mlsum-automatic-summarization | 17a3a2e474932a90e664ef2c75c5e46ef964fc1a | 2021-09-07T08:21:55.000Z | [
"pytorch",
"tf",
"mbart",
"text2text-generation",
"fr",
"dataset:MLSUM",
"arxiv:2004.14900",
"transformers",
"summarization",
"bart",
"license:mit",
"autotrain_compatible"
] | summarization | false | lincoln | null | lincoln/mbart-mlsum-automatic-summarization | 1,991 | 3 | transformers | 1,367 | ---
language:
- fr
license: mit
datasets:
- MLSUM
pipeline_tag: "summarization"
widget:
- text: « La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail. Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple. Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet, dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet, donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement. Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé. Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020, quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs, ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures. D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été.
tags:
- summarization
- mbart
- bart
---
# Automatic summarization of press articles
This model is based on [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50) and was fine-tuned on press articles from the MLSUM database. The working assumption was that the articles' introductory leads ("chapeaux") make good reference summaries.
## Training
We tested two model architectures (T5 and BART) with input texts of 512 or 1024 tokens. In the end, the BART model with 512 tokens was selected.
It was trained for 2 epochs (~700K articles) on a Tesla V100 (32 hours of training).
## Results

We compared our model (`mbart-large-512-full` in the chart) against two references:
* MBERT, which corresponds to the performance of the model trained by the team behind the MLSUM article database
* Barthez, another model based on press articles from the OrangeSum database
We can see that our model's novelty score (cf. the MLSUM paper) is not yet comparable to these two references, let alone to human writing; nevertheless, the generated summaries are overall of good quality.
## Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import SummarizationPipeline
model_name = 'lincoln/mbart-mlsum-automatic-summarization'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
nlp = SummarizationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("""
« La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail.
Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple.
Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet,
dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet,
donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement.
Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé.
Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020,
quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs,
ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures.
D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été.
""")
```
## Citation
```bibtex
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
year={2020},
eprint={2004.14900},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
hf-internal-testing/tiny-random-camembert | 8fa65c628a3f475b1ed4e8dff6adf09db1b6bb83 | 2022-07-27T10:07:32.000Z | [
"pytorch",
"camembert",
"feature-extraction",
"transformers"
] | feature-extraction | false | hf-internal-testing | null | hf-internal-testing/tiny-random-camembert | 1,990 | null | transformers | 1,368 | Entry not found |
microsoft/DialogRPT-human-vs-rand | 7206b425c2c016dd5533e2a99e665ba3546e5ce0 | 2021-05-23T09:18:07.000Z | [
"pytorch",
"gpt2",
"text-classification",
"arxiv:2009.06978",
"transformers"
] | text-classification | false | microsoft | null | microsoft/DialogRPT-human-vs-rand | 1,986 | 1 | transformers | 1,369 | # Demo
Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)
| Context | Response | `human_vs_rand` score |
| :------ | :------- | :------------: |
| I love NLP! | He is a great basketball player. | 0.027 |
| I love NLP! | Can you tell me how it works? | 0.754 |
| I love NLP! | Me too! | 0.631 |
The `human_vs_rand` score predicts how likely it is that the response corresponds to the given context rather than being a random response.
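For reference, a minimal scoring sketch with `transformers` is shown below; the `<|endoftext|>` separator and the sigmoid read-out follow the DialogRPT repository's demo code and should be treated as assumptions rather than part of this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "microsoft/DialogRPT-human-vs-rand"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
def score(context, response):
    # context and response are joined with the GPT-2 end-of-text token
    input_ids = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits
    return torch.sigmoid(logits).item()
print(score("I love NLP!", "Can you tell me how it works?"))
```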
# DialogRPT-human-vs-rand
### Dialog Ranking Pretrained Transformers
> How likely is a dialog response to be upvoted 👍 and/or get replies 💬?
This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is trained to predict.
It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/), trained on more than 100 million human feedback data points.
It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates.
Quick Links:
* [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/)
* [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT)
* [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)
We considered the following tasks and provided corresponding pretrained models.
|Task | Description | Pretrained model |
| :------------- | :----------- | :-----------: |
| **Human feedback** | **given a context and its two human responses, predict...**|
| `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) |
| `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) |
| `depth`| ... which gets longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) |
| **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** |
| `human_vs_rand`| ... a random human response | this model |
| `human_vs_machine`| ... a machine generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) |
### Contact:
Please create an issue on [our repo](https://github.com/golsun/DialogRPT)
### Citation:
```
@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
year={2020},
booktitle={EMNLP}
}
```
|
dbmdz/bert-base-italian-uncased | d91243bae3a97a72691e9a6bfdf5d9f8fa4be9e4 | 2021-05-19T15:00:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"it",
"dataset:wikipedia",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-base-italian-uncased | 1,984 | 2 | transformers | 1,370 | ---
language: it
license: mit
datasets:
- wikipedia
---
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much followed the ELECTRA training procedure used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli | 1b7b1b212ea53c7a64546076569fbb01c3df8fbd | 2022-07-28T16:24:07.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:multi_nli",
"dataset:anli",
"dataset:fever",
"dataset:lingnli",
"dataset:alisawuffles/WANLI",
"arxiv:2104.07179",
"arxiv:2111.09543",
"transformers",
"zero-shot-classification",
"license:mit",
"model-index"
] | zero-shot-classification | false | MoritzLaurer | null | MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli | 1,980 | 4 | transformers | 1,371 | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
license: mit
metrics:
- accuracy
datasets:
- multi_nli
- anli
- fever
- lingnli
- alisawuffles/WANLI
pipeline_tag: zero-shot-classification
#- text-classification
#widget:
#- text: "I first thought that I really liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was not good."
model-index: # info: https://github.com/huggingface/hub-docs/blame/main/modelcard.md
- name: DeBERTa-v3-large-mnli-fever-anli-ling-wanli
results:
- task:
type: text-classification # Required. Example: automatic-speech-recognition
name: Natural Language Inference # Optional. Example: Speech Recognition
dataset:
type: multi_nli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: MultiNLI-matched # Required. A pretty name for the dataset. Example: Common Voice (French)
split: validation_matched # Optional. Example: test
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
        value: 0.912 # Required. Example: 20.90
#name: # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- task:
type: text-classification # Required. Example: automatic-speech-recognition
name: Natural Language Inference # Optional. Example: Speech Recognition
dataset:
type: multi_nli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: MultiNLI-mismatched # Required. A pretty name for the dataset. Example: Common Voice (French)
split: validation_mismatched # Optional. Example: test
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
        value: 0.908 # Required. Example: 20.90
#name: # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- task:
type: text-classification # Required. Example: automatic-speech-recognition
name: Natural Language Inference # Optional. Example: Speech Recognition
dataset:
type: anli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: ANLI-all # Required. A pretty name for the dataset. Example: Common Voice (French)
split: test_r1+test_r2+test_r3 # Optional. Example: test
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
        value: 0.702 # Required. Example: 20.90
#name: # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- task:
type: text-classification # Required. Example: automatic-speech-recognition
name: Natural Language Inference # Optional. Example: Speech Recognition
dataset:
type: anli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: ANLI-r3 # Required. A pretty name for the dataset. Example: Common Voice (French)
split: test_r3 # Optional. Example: test
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
        value: 0.64 # Required. Example: 20.90
#name: # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- task:
type: text-classification # Required. Example: automatic-speech-recognition
name: Natural Language Inference # Optional. Example: Speech Recognition
dataset:
type: alisawuffles/WANLI # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: WANLI # Required. A pretty name for the dataset. Example: Common Voice (French)
split: test # Optional. Example: test
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
        value: 0.77 # Required. Example: 20.90
#name: # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- task:
type: text-classification # Required. Example: automatic-speech-recognition
name: Natural Language Inference # Optional. Example: Speech Recognition
dataset:
type: lingnli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: LingNLI # Required. A pretty name for the dataset. Example: Common Voice (French)
split: test # Optional. Example: test
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
        value: 0.87 # Required. Example: 20.90
#name: # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
---
# DeBERTa-v3-large-mnli-fever-anli-ling-wanli
## Model description
This model was fine-tuned on the [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), Adversarial-NLI ([ANLI](https://huggingface.co/datasets/anli)), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) datasets, which comprise 885 242 NLI hypothesis-premise pairs. This model is the best performing NLI model on the Hugging Face Hub as of 06.06.22 and can be used for zero-shot classification. It significantly outperforms all other large models on the [ANLI benchmark](https://github.com/facebookresearch/anli).
The foundation model is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-v3-large). DeBERTa-v3 combines several recent innovations compared to classical masked language models like BERT and RoBERTa; see the [paper](https://arxiv.org/abs/2111.09543) for details.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was not good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
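Because the checkpoint is an NLI model, it can also be dropped into the zero-shot classification pipeline; the snippet below is a minimal sketch with illustrative text and labels.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli")
text = "The new graphics card renders twice as many frames per second as its predecessor."
candidate_labels = ["technology", "sports", "politics"]
print(classifier(text, candidate_labels))
```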
### Training data
DeBERTa-v3-large-mnli-fever-anli-ling-wanli was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), Adversarial-NLI ([ANLI](https://huggingface.co/datasets/anli)), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) datasets, which comprise 885 242 NLI hypothesis-premise pairs. Note that [SNLI](https://huggingface.co/datasets/snli) was explicitly excluded due to quality issues with the dataset. More data does not necessarily make for better NLI models.
### Training procedure
DeBERTa-v3-large-mnli-fever-anli-ling-wanli was trained using the Hugging Face trainer with the following hyperparameters. Note that longer training with more epochs hurt performance in my tests (overfitting).
```
training_args = TrainingArguments(
num_train_epochs=4, # total number of training epochs
learning_rate=5e-06,
per_device_train_batch_size=16, # batch size per device during training
gradient_accumulation_steps=2, # doubles the effective batch_size to 32, while decreasing memory requirements
per_device_eval_batch_size=64, # batch size for evaluation
warmup_ratio=0.06, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the test sets for MultiNLI, ANLI, LingNLI, WANLI and the dev set for Fever-NLI. The metric used is accuracy.
The model achieves state-of-the-art performance on each dataset. Surprisingly, it outperforms the previous [state-of-the-art on ANLI](https://github.com/facebookresearch/anli) (ALBERT-XXL) by 8.3%. I assume that this is because ANLI was created to fool masked language models like RoBERTa (or ALBERT), while DeBERTa-v3 uses a better pre-training objective (RTD) and disentangled attention, and because I fine-tuned it on higher-quality NLI data.
|Datasets|mnli_test_m|mnli_test_mm|anli_test|anli_test_r3|ling_test|wanli_test|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.912|0.908|0.702|0.64|0.87|0.77|
|Speed (text/sec, A100 GPU)|696.0|697.0|488.0|425.0|828.0|980.0|
## Limitations and bias
Please consult the original DeBERTa-v3 paper and literature on different NLI datasets for more information on the training data and potential biases. The model will reproduce statistical patterns in the training data.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
|
kamalkraj/bioelectra-base-discriminator-pubmed | b08ce00d5a23e2682a20e9d33356730530bdecd1 | 2021-09-07T13:52:16.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | kamalkraj | null | kamalkraj/bioelectra-base-discriminator-pubmed | 1,978 | 3 | transformers | 1,372 | ## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/).
Cite our paper using below citation
```
@inproceedings{kanakarajan-etal-2021-bioelectra,
title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators",
author = "Kanakarajan, Kamal raj and
Kundumani, Bhuvana and
Sankarasubbu, Malaikannan",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.bionlp-1.16",
doi = "10.18653/v1/2021.bionlp-1.16",
pages = "143--154",
abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.",
}
```
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()]
``` |
mbartolo/roberta-large-synqa | 1ae8322fd562c2b2193a7d2b8d0887177b616d62 | 2022-07-25T23:36:39.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:adversarial_qa",
"dataset:mbartolo/synQA",
"dataset:squad",
"arxiv:2002.00293",
"arxiv:2104.08678",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | mbartolo | null | mbartolo/roberta-large-synqa | 1,973 | null | transformers | 1,373 | ---
language:
- en
tags:
- question-answering
license: apache-2.0
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/roberta-large-synqa
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 89.6529
verified: true
- name: F1
type: f1
value: 94.8172
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 55.3333
verified: true
- name: F1
type: f1
value: 66.7464
verified: true
---
# Model Overview
This is a RoBERTa-Large QA Model trained from https://huggingface.co/roberta-large in two stages. First, it is trained on synthetic adversarial data generated using a BART-Large question generator on Wikipedia passages from SQuAD, and then it is trained on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage of fine-tuning.
# Data
Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
# Training Process
Approx. 1 training epoch on the synthetic data and 2 training epochs on the manually-curated data.
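# Usage
A minimal extractive question-answering sketch with the `transformers` pipeline; the question and context below are illustrative placeholders.
```python
from transformers import pipeline
qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa")
result = qa(
    question="What data was used in the second stage of fine-tuning?",
    context="The model was first trained on synthetic adversarial data and then fine-tuned on SQuAD and AdversarialQA.",
)
print(result)
```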
# Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details. |
sberbank-ai/ruT5-large | 4d14102f32e730d68b1950bfaeb7a4988c978737 | 2021-09-28T15:56:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers",
"autotrain_compatible"
] | text2text-generation | false | sberbank-ai | null | sberbank-ai/ruT5-large | 1,962 | 7 | transformers | 1,374 | ---
language:
- ru
tags:
- PyTorch
- Transformers
thumbnail: "https://github.com/sberbank-ai/model-zoo"
---
# ruT5-large
The model was trained by the [SberDevices](https://sberdevices.ru/) team.
* Task: `text2text generation`
* Type: `encoder-decoder`
* Tokenizer: `bpe`
* Dict size: `32 101 `
* Num Parameters: `737 M`
* Training Data Volume `300 GB`
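A minimal loading sketch with the `transformers` Auto classes is shown below; since this is a raw pretrained checkpoint, the Russian prompt is only an illustrative placeholder and meaningful outputs require task-specific fine-tuning.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "sberbank-ai/ruT5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
inputs = tokenizer("Модель обучена командой SberDevices.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```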
|
mrm8488/GPT-2-finetuned-covid-bio-medrxiv | 9f18ece8499d11cd7e0679e14be9e32ac9148f5e | 2021-08-25T21:38:35.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/GPT-2-finetuned-covid-bio-medrxiv | 1,961 | null | transformers | 1,375 | ---
language: en
thumbnail:
widget:
- text: "Old people with COVID-19 tends to suffer"
---
# GPT-2 + bio/medrxiv files from CORD19: 🦠 ✍ ⚕
**GPT-2** fine-tuned on **biorxiv_medrxiv** files from the [CORD-19](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge) dataset.
## Dataset details:
| Dataset | # Files |
| ---------------------- | ----- |
| biorxiv_medrxiv | 885 |
## Model training:
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
export TRAIN_FILE=/path/to/dataset/train.txt
python run_language_modeling.py \
    --model_type gpt2 \
    --model_name_or_path gpt2 \
    --do_train \
    --train_data_file $TRAIN_FILE \
    --num_train_epochs 4 \
    --output_dir model_output \
    --overwrite_output_dir \
    --save_steps 2000 \
    --per_gpu_train_batch_size 3
```
## Model in action / Example of usage: ✒
You can get the following script [here](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py)
```bash
python run_generation.py \
    --model_type gpt2 \
    --model_name_or_path mrm8488/GPT-2-finetuned-CORD19 \
    --length 200
```
```txt
👵👴🦠
# Input: Old people with COVID-19 tends to suffer
# Output: === GENERATED SEQUENCE 1 ===
Old people with COVID-19 tends to suffer more symptom onset time and death. It is well known that many people with COVID-19 have high homozygous ZIKV infection in the face of severe symptoms in both severe and severe cases.
The origin of Wuhan Fever was investigated by Prof. Shen Jiang at the outbreak of Wuhan Fever [34]. As Huanan Province is the epicenter of this outbreak, Huanan, the epicenter of epidemic Wuhan Fever, is the most potential location for the direct transmission of infection (source: Zhongzhen et al., 2020). A negative risk ratio indicates more frequent underlying signs in the people in Huanan Province with COVID-19 patients. Further analysis of reported Huanan Fever onset data in the past two years indicated that the intensity of exposure is the key risk factor for developing MERS-CoV infection in this region, especially among children and elderly. To be continued to develop infected patients would be a very important area for
```
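For quick experiments, the same checkpoint can presumably also be used through the `transformers` text-generation pipeline; the call below is a sketch, not part of the original training scripts.
```python
from transformers import pipeline
generator = pipeline("text-generation", model="mrm8488/GPT-2-finetuned-covid-bio-medrxiv")
print(generator("Old people with COVID-19 tends to suffer", max_length=60, num_return_sequences=1))
```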

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
Jiva/xlm-roberta-large-it-mnli | c6e64469ec4aa17fedbd1b2522256f90a90b5b86 | 2021-12-10T14:56:38.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"it",
"dataset:multi_nli",
"dataset:glue",
"arxiv:1911.02116",
"transformers",
"tensorflow",
"license:mit",
"zero-shot-classification"
] | zero-shot-classification | false | Jiva | null | Jiva/xlm-roberta-large-it-mnli | 1,960 | 4 | transformers | 1,376 | ---
language: it
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- multi_nli
- glue
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "La seconda guerra mondiale vide contrapporsi, tra il 1939 e il 1945, le cosiddette potenze dell'Asse e gli Alleati che, come già accaduto ai belligeranti della prima guerra mondiale, si combatterono su gran parte del pianeta; il conflitto ebbe inizio il 1º settembre 1939 con l'attacco della Germania nazista alla Polonia e terminò, nel teatro europeo, l'8 maggio 1945 con la resa tedesca e, in quello asiatico, il successivo 2 settembre con la resa dell'Impero giapponese dopo i bombardamenti atomici di Hiroshima e Nagasaki."
candidate_labels: "guerra, storia, moda, cibo"
multi_class: true
---
# XLM-roBERTa-large-it-mnli
## Version 0.1
| | matched-it acc | mismatched-it acc |
| -------------------------------------------------------------------------------- |----------------|-------------------|
| XLM-roBERTa-large-it-mnli | 84.75 | 85.39 |
## Model Description
This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a subset of NLI data taken from an automatically translated version of the MNLI corpus. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
## Intended Usage
This model is intended to be used for zero-shot text classification of Italian texts.
Since the base model was pre-trained on 100 different languages, the model has shown
some effectiveness in languages beyond Italian as well. See the full list of
pre-trained languages in appendix A of the
[XLM RoBERTa paper](https://arxiv.org/abs/1911.02116)
For English-only classification, it is recommended to use
[bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or
[a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Jiva/xlm-roberta-large-it-mnli", device=0, use_fast=True, multi_label=True)
```
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
```python
# we will classify the following wikipedia entry about Sardinia"
sequence_to_classify = "La Sardegna è una regione italiana a statuto speciale di 1 592 730 abitanti con capoluogo Cagliari, la cui denominazione bilingue utilizzata nella comunicazione ufficiale è Regione Autonoma della Sardegna / Regione Autònoma de Sardigna."
# we can specify candidate labels in Italian:
candidate_labels = ["geografia", "politica", "macchine", "cibo", "moda"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['geografia', 'moda', 'politica', 'macchine', 'cibo'],
# 'scores': [0.38871392607688904, 0.22633370757102966, 0.19398456811904907, 0.13735772669315338, 0.13708525896072388]}
```
The default hypothesis template is the English `This text is {}`. With this model, better results are achieved when providing a translated template:
```python
sequence_to_classify = "La Sardegna è una regione italiana a statuto speciale di 1 592 730 abitanti con capoluogo Cagliari, la cui denominazione bilingue utilizzata nella comunicazione ufficiale è Regione Autonoma della Sardegna / Regione Autònoma de Sardigna."
candidate_labels = ["geografia", "politica", "macchine", "cibo", "moda"]
hypothesis_template = "si parla di {}"
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# 'scores': [0.6068345904350281, 0.34715887904167175, 0.32433947920799255, 0.3068877160549164, 0.18744681775569916]}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained('Jiva/xlm-roberta-large-it-mnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('Jiva/xlm-roberta-large-it-mnli')
premise = sequence_to_classify  # the text defined in the pipeline example above
label = 'geografia'
hypothesis = f'si parla di {label}.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
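To rank several candidate labels with this manual approach, the same entailment scoring can simply be repeated per hypothesis. A minimal sketch reusing the objects defined above (the candidate labels are illustrative):
```python
candidate_labels = ["geografia", "politica", "macchine", "cibo", "moda"]

scores = {}
for candidate in candidate_labels:
    hypothesis = f"si parla di {candidate}."
    x = tokenizer.encode(premise, hypothesis, return_tensors="pt", truncation="only_first")
    logits = nli_model(x.to(device))[0]
    # as above, keep only contradiction (0) and entailment (2) and softmax over them
    entail_contradiction_logits = logits[:, [0, 2]]
    probs = entail_contradiction_logits.softmax(dim=1)
    scores[candidate] = probs[0, 1].item()

# highest entailment probability first
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```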
## Training
## Version 0.1
The model has now been retrained on the full training set. Around 1,000 sentence pairs were removed from the set because their translation was botched by the translation model.
| metric | value |
|----------------- |------- |
| learning_rate | 4e-6 |
| optimizer | AdamW |
| batch_size | 80 |
| mcc | 0.77 |
| train_loss | 0.34 |
| eval_loss | 0.40 |
| stopped_at_step | 9754 |
## Version 0.0
This model was pre-trained on a set of 100 languages, as described in
[the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the NLI task using an Italian translation of the MNLI dataset (85% of the train set only so far). The model used for translating the texts is Helsinki-NLP/opus-mt-en-it, with a max output sequence length of 120. The model was trained for 1 epoch with a learning rate of 4e-6 and a batch size of 80; it currently scores 82% accuracy on the remaining 15% of the training set. |
ltgoslo/norbert | 44815f7e109b53547cccdf3c6847f4c28b989816 | 2022-03-25T16:02:00.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"no",
"arxiv:2104.06546",
"transformers",
"norwegian",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | ltgoslo | null | ltgoslo/norbert | 1,956 | 6 | transformers | 1,377 | ---
language: no
license: cc-by-4.0
pipeline_tag: fill-mask
tags:
- norwegian
- bert
thumbnail: https://raw.githubusercontent.com/ltgoslo/NorBERT/main/Norbert.png
---
## Quickstart
**Release 1.1** (February 13, 2021)
Please check also our newer model: [NorBERT 2](https://huggingface.co/ltgoslo/norbert2), trained on a much larger corpus.
Download the model here:
* Cased Norwegian BERT Base: [216.zip](http://vectors.nlpl.eu/repository/20/216.zip)
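Alternatively, the checkpoint hosted here can be queried directly through the standard fill-mask pipeline. A minimal sketch (the Norwegian example sentence is only an illustration):
```python
from transformers import pipeline

# NorBERT is a cased Norwegian BERT, so the regular fill-mask pipeline applies
unmasker = pipeline("fill-mask", model="ltgoslo/norbert")
print(unmasker("Oslo er en [MASK] by."))
```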
More about NorBERT training corpora and training procedure: http://norlm.nlpl.eu/
Associated code: https://github.com/ltgoslo/NorBERT
Check this paper for more details:
_Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, Stephan Oepen. [Large-Scale Contextualised Language Modelling for Norwegian](https://arxiv.org/abs/2104.06546), NoDaLiDa'21 (2021)_
NorBERT was trained as a part of NorLM, a joint initiative of the projects [EOSC-Nordic](https://www.eosc-nordic.eu/) (European Open Science Cloud) and [SANT](https://www.mn.uio.no/ifi/english/research/projects/sant/index.html) (Sentiment Analysis for Norwegian),
coordinated by the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo.
The computations were performed on resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway. |
hf-internal-testing/test_dynamic_model_with_util | b731e5fae6d80a4a775461251c4388886fb7a249 | 2022-01-26T17:54:17.000Z | [
"pytorch",
"new-model",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/test_dynamic_model_with_util | 1,953 | null | transformers | 1,378 | Entry not found |
facebook/wav2vec2-large-960h-lv60 | 8e7d14742e8f98c6bbb24e5231406af321a8f9ce | 2022-04-05T16:42:07.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-960h-lv60 | 1,947 | 5 | transformers | 1,379 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
model-index:
- name: wav2vec2-large-960h-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
args: en
metrics:
- name: Test WER
type: wer
value: 2.2
---
# Wav2Vec2-Large-960h-Lv60
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model, pretrained on the Libri-Light (LV-60k) corpus and fine-tuned on 960 hours of Librispeech 16 kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16 kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
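The dummy dataset above is already sampled at 16 kHz; your own recordings may need resampling first. A minimal sketch using torchaudio (torchaudio and the file name are assumptions, any resampler works):
```python
import torchaudio

waveform, sample_rate = torchaudio.load("my_recording.wav")
if sample_rate != 16_000:
    # resample to the 16 kHz rate the model expects
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)

input_values = processor(waveform.squeeze().numpy(), sampling_rate=16_000,
                         return_tensors="pt", padding="longest").input_values
```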
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=16, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 2.2 | 4.5 | |
nikokons/gpt2-greek | b2bb85c722742ce6ea0b9e025d50425e061181c8 | 2022-07-20T09:59:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"el",
"transformers"
] | text-generation | false | nikokons | null | nikokons/gpt2-greek | 1,944 | null | transformers | 1,380 | ---
language: el
---
## gpt2-greek
## Dataset:
The model is trained on a collection of almost 5 GB of Greek texts, with the main source being Greek Wikipedia. The content is extracted using the Wikiextractor tool (Attardi, 2012). The dataset is constructed as 5 sentences per sample (about 3.7 million samples) and the end of each document is marked with the string <|endoftext|>, providing the model with paragraph information, as done for the original GPT-2 training set by Radford et al. The input sentences are pre-processed and tokenized using 22,000 merges of byte-pair encoding.
## Model:
The model is the "small" version of GPT-2 (12-layer, 768-hidden, 12-heads) with the only difference that the maximum sequence length is set at 512 tokens instead of 1024.
## Training details:
A generative Transformer model (GPT-2) is trained from scratch on a large corpus of Greek text so that it can generate long stretches of contiguous, coherent text. Attention dropout with a rate of 0.1 is used for regularization on all layers, together with an L2 weight decay of 0.01. In addition, a batch size of 4 with gradients accumulated over 8 iterations is used, resulting in an effective batch size of 32. The model uses the Adam optimization scheme with a learning rate of 1e-4 and is trained for 20 epochs. The learning rate increases linearly from zero over the first 9,000 updates and then decreases linearly following a linear schedule. The implementation is based on the open-source PyTorch-Transformers library (Hugging Face, 2019).
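A minimal text-generation sketch with the standard Transformers pipeline (the Greek prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="nikokons/gpt2-greek")
# sample a short continuation of a Greek prompt
print(generator("Η Ελλάδα είναι", max_length=50, do_sample=True, top_k=50))
```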
|
nghuyong/ernie-gram-zh | 257fee0915f1cba8dbea92c976493dcdd0491174 | 2022-04-04T06:00:26.000Z | [
"pytorch",
"bert",
"feature-extraction",
"zh",
"arxiv:2010.12148",
"transformers"
] | feature-extraction | false | nghuyong | null | nghuyong/ernie-gram-zh | 1,943 | null | transformers | 1,381 | ---
language: zh
---
# ERNIE-Gram-zh
## Introduction
ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
More detail: https://arxiv.org/abs/2010.12148
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|ernie-gram-zh| Chinese |Layer:12, Hidden:768, Heads:12|
This released PyTorch model is converted from the officially released PaddlePaddle ERNIE model, and
a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/ERNIE
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-gram-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-gram-zh")
``` |
monologg/distilkobert | cfbce1328041f68781414250c9013128e77e82d2 | 2020-05-13T03:37:29.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | monologg | null | monologg/distilkobert | 1,941 | 2 | transformers | 1,382 | Entry not found |
sentence-transformers/msmarco-distilbert-base-dot-prod-v3 | 0bafe057815532ca7ee37f002d9d1413d78b6d67 | 2022-06-15T22:20:51.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/msmarco-distilbert-base-dot-prod-v3 | 1,936 | 1 | sentence-transformers | 1,383 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/msmarco-distilbert-base-dot-prod-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-dot-prod-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
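Because this checkpoint was tuned for dot-product similarity (as the model name suggests), retrieval scores are best computed with the dot product rather than cosine similarity. A minimal semantic-search sketch using `util.dot_score` (the query and passages are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-dot-prod-v3')

query_embedding = model.encode("How big is London?", convert_to_tensor=True)
passage_embeddings = model.encode([
    "London has 9,787,426 inhabitants at the 2011 census.",
    "London is known for its financial district.",
], convert_to_tensor=True)

# dot-product scores between the query and each passage
print(util.dot_score(query_embedding, passage_embeddings))
```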
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-dot-prod-v3)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
cahya/bert-base-indonesian-522M | 7baa8f5fa385e6eff31184f11876d0d19bf5eb6c | 2021-05-19T13:38:45.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"id",
"dataset:wikipedia",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | cahya | null | cahya/bert-base-indonesian-522M | 1,934 | 3 | transformers | 1,384 | ---
language: "id"
license: "mit"
datasets:
- wikipedia
widget:
- text: "Ibu ku sedang bekerja [MASK] sawah."
---
# Indonesian BERT base model (uncased)
## Model description
This is a BERT-base model pre-trained on Indonesian Wikipedia using a masked language modeling (MLM) objective. The
model is uncased: it does not make a difference between indonesia and Indonesia.
This is one of several language models that have been pre-trained on Indonesian datasets. More detail about
their usage on downstream tasks (text classification, text generation, etc.) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers)
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cahya/bert-base-indonesian-522M')
>>> unmasker("Ibu ku sedang bekerja [MASK] supermarket")
[{'sequence': '[CLS] ibu ku sedang bekerja di supermarket [SEP]',
'score': 0.7983310222625732,
'token': 1495},
{'sequence': '[CLS] ibu ku sedang bekerja. supermarket [SEP]',
'score': 0.090003103017807,
'token': 17},
{'sequence': '[CLS] ibu ku sedang bekerja sebagai supermarket [SEP]',
'score': 0.025469014421105385,
'token': 1600},
{'sequence': '[CLS] ibu ku sedang bekerja dengan supermarket [SEP]',
'score': 0.017966199666261673,
'token': 1555},
{'sequence': '[CLS] ibu ku sedang bekerja untuk supermarket [SEP]',
'score': 0.016971781849861145,
'token': 1572}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
model_name='cahya/bert-base-indonesian-522M'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in Tensorflow:
```python
from transformers import BertTokenizer, TFBertModel
model_name='cahya/bert-base-indonesian-522M'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = TFBertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
This model was pre-trained with 522 MB of Indonesian Wikipedia.
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
```[CLS] Sentence A [SEP] Sentence B [SEP]```
|
phiyodr/bert-base-finetuned-squad2 | c73e3f22381ce4c230b49844ea7b8c703887385c | 2021-05-20T02:34:19.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"en",
"dataset:squad2",
"arxiv:1810.04805",
"arxiv:1806.03822",
"transformers",
"autotrain_compatible"
] | question-answering | false | phiyodr | null | phiyodr/bert-base-finetuned-squad2 | 1,934 | null | transformers | 1,385 | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
metrics:
- exact
- f1
widget:
- text: "What discipline did Winkelmann create?"
context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art."
---
# bert-base-finetuned-squad2
## Model description
This model is based on **[bert-base-uncased](https://huggingface.co/bert-base-uncased)** and was finetuned on **[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/)**. The corresponding papers you can found [here (model)](https://arxiv.org/abs/1810.04805) and [here (data)](https://arxiv.org/abs/1806.03822).
## How to use
```python
from transformers.pipelines import pipeline
model_name = "phiyodr/bert-base-finetuned-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'What discipline did Winkelmann create?',
'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. '
}
nlp(inputs)
```
## Training procedure
```
{
"base_model": "bert-base-uncased",
"do_lower_case": True,
"learning_rate": 3e-5,
"num_train_epochs": 4,
"max_seq_length": 384,
"doc_stride": 128,
"max_query_length": 64,
"batch_size": 96
}
```
## Eval results
- Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
- Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md))
```
{
"exact": 70.3950138970774,
"f1": 73.90527661873521,
"total": 11873,
"HasAns_exact": 71.4574898785425,
"HasAns_f1": 78.48808186475087,
"HasAns_total": 5928,
"NoAns_exact": 69.33557611438184,
"NoAns_f1": 69.33557611438184,
"NoAns_total": 5945
}
```
|
microsoft/xlm-align-base | 3e2a40ea5f9c75353ad2769bd74f7cb425fce671 | 2021-08-04T15:23:10.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | microsoft | null | microsoft/xlm-align-base | 1,932 | 3 | transformers | 1,386 | # XLM-Align
**XLM-Align** (ACL 2021, [paper](https://aclanthology.org/2021.acl-long.265/), [repo](https://github.com/CZWin32768/XLM-Align), [model](https://huggingface.co/microsoft/xlm-align-base)) Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment
XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our [paper](https://aclanthology.org/2021.acl-long.265/).
## Example
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/xlm-align-base")
model = AutoModel.from_pretrained("microsoft/xlm-align-base")
```
## Evaluation Results
XTREME cross-lingual understanding tasks:
| Model | POS | NER | XQuAD | MLQA | TyDiQA | XNLI | PAWS-X | Avg |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| XLM-R_base | 75.6 | 61.8 | 71.9 / 56.4 | 65.1 / 47.2 | 55.4 / 38.3 | 75.0 | 84.9 | 66.4 |
| XLM-Align | **76.0** | **63.7** | **74.7 / 59.0** | **68.1 / 49.8** | **62.1 / 44.8** | **76.2** | **86.8** | **68.9** |
## MD5
```
b9d214025837250ede2f69c9385f812c config.json
6005db708eb4bab5b85fa3976b9db85b pytorch_model.bin
bf25eb5120ad92ef5c7d8596b5dc4046 sentencepiece.bpe.model
eedbd60a7268b9fc45981b849664f747 tokenizer.json
```
## About
Contact: chizewen\@outlook.com
BibTeX:
```
@inproceedings{xlmalign,
title = "Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment",
author={Zewen Chi and Li Dong and Bo Zheng and Shaohan Huang and Xian-Ling Mao and Heyan Huang and Furu Wei},
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.265",
doi = "10.18653/v1/2021.acl-long.265",
pages = "3418--3430",}
``` |
Alireza1044/albert-base-v2-sst2 | e406771b99e1913921a68fbb95d121b582d1ecb7 | 2021-07-26T14:02:35.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | Alireza1044 | null | Alireza1044/albert-base-v2-sst2 | 1,930 | null | transformers | 1,387 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metric:
name: Accuracy
type: accuracy
value: 0.9231651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3808
- Accuracy: 0.9232
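A minimal usage sketch with the standard text-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# SST-2 is a binary sentiment task (negative/positive)
classifier = pipeline("text-classification", model="Alireza1044/albert-base-v2-sst2")
print(classifier("This movie was absolutely wonderful!"))
```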
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
google/bert_uncased_L-2_H-256_A-4 | 4e937a8675e5afd9a4836735c186ec01695bc3ea | 2021-05-19T17:28:46.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-2_H-256_A-4 | 1,928 | 1 | transformers | 1,388 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
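For reference, loading a miniature for fine-tuning follows the standard Transformers recipe. A minimal sketch (the sequence-classification head and example sentence are illustrative, not part of the original release):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "google/bert_uncased_L-2_H-256_A-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("BERT miniatures are handy for distillation.", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```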
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
indobenchmark/indobart | 73bead20e4a67f578f6f3b3f7038040304dc7065 | 2022-06-21T17:52:16.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"id",
"dataset:Indo4B+",
"arxiv:2104.08200",
"transformers",
"indogpt",
"indobenchmark",
"indonlg",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | indobenchmark | null | indobenchmark/indobart | 1,923 | 1 | transformers | 1,389 | ---
language: id
tags:
- indogpt
- indobenchmark
- indonlg
license: mit
inference: false
datasets:
- Indo4B+
---
# IndoBART Model
[IndoBART](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart` | 132M | Indo4B-Plus (23.79 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
microsoft/CodeGPT-small-py | 97ebaaa7103f64e3085e88f0ecd28d1ffeb01bea | 2021-05-23T09:01:50.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | microsoft | null | microsoft/CodeGPT-small-py | 1,922 | 1 | transformers | 1,390 | Entry not found |
saattrupdan/nbailab-base-ner-scandi | 8635b40703c27f868a29a36d99e264facddc6610 | 2022-02-09T15:21:05.000Z | [
"pytorch",
"bert",
"token-classification",
"da",
"no",
"nb",
"nn",
"sv",
"fo",
"is",
"dataset:dane",
"dataset:norne",
"dataset:wikiann",
"dataset:suc3.0",
"arxiv:1911.12146",
"transformers",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | saattrupdan | null | saattrupdan/nbailab-base-ner-scandi | 1,921 | 8 | transformers | 1,391 | ---
language:
- da
- no
- nb
- nn
- sv
- fo
- is
license: mit
datasets:
- dane
- norne
- wikiann
- suc3.0
model-index:
- name: nbailab-base-ner-scandi
results: []
widget:
- "Hans er en professor på Københavns Universitetet i København, og han er en rigtig københavner. Hans kat, altså Hans' kat, Lisa, er supersød. Han fik købt en Mona Lisa på tilbud i Netto og gav den til sin kat, og nu er Mona Lisa'en Lisa's kæreste eje. Hans bror Peter og Hans besluttede, at Peterskirken skulle have fint besøg. Men nu har de begge Corona."
inference:
parameters:
aggregation_strategy: "first"
---
# ScandiNER - Named Entity Recognition model for Scandinavian Languages
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) for Named Entity Recognition for Danish, Norwegian (both Bokmål and Nynorsk), Swedish, Icelandic and Faroese. It has been fine-tuned on the concatenation of [DaNE](https://aclanthology.org/2020.lrec-1.565/), [NorNE](https://arxiv.org/abs/1911.12146), [SUC 3.0](https://spraakbanken.gu.se/en/resources/suc3) and the Icelandic and Faroese parts of the [WikiANN](https://aclanthology.org/P17-1178/) dataset. It also works reasonably well on English sentences, given the fact that the pretrained model is also trained on English data along with Scandinavian languages.
The model will predict the following four entities:
| **Tag** | **Name** | **Description** |
| :------ | :------- | :-------------- |
| `PER` | Person | The name of a person (e.g., *Birgitte* and *Mohammed*) |
| `LOC` | Location | The name of a location (e.g., *Tyskland* and *Djurgården*) |
| `ORG` | Organisation | The name of an organisation (e.g., *Bunnpris* and *Landsbankinn*) |
| `MISC` | Miscellaneous | A named entity of a different kind (e.g., *Ūjķnustu pund* and *Mona Lisa*) |
## Quick start
You can use this model in your scripts as follows:
```python
>>> from transformers import pipeline
>>> import pandas as pd
>>> ner = pipeline(task='ner',
... model='saattrupdan/nbailab-base-ner-scandi',
... aggregation_strategy='first')
>>> result = ner('Borghild kjøper seg inn i Bunnpris')
>>> pd.DataFrame.from_records(result)
entity_group score word start end
0 PER 0.981257 Borghild 0 8
1 ORG 0.974099 Bunnpris 26 34
```
## Performance
The following is the Micro-F1 NER performance on Scandinavian NER test datasets, compared with the current state-of-the-art. The models have been evaluated on the test set along with 9 bootstrapped versions of it, with the mean and 95% confidence interval shown here:
| **Model ID** | **DaNE** | **NorNE-NB** | **NorNE-NN** | **SUC 3.0** | **WikiANN-IS** | **WikiANN-FO** | **Average** |
| :----------- | -------: | -----------: | -----------: | ----------: | -------------: | -------------: | ----------: |
| saattrupdan/nbailab-base-ner-scandi | **87.44 ± 0.81** | **91.06 ± 0.26** | **90.42 ± 0.61** | **88.37 ± 0.17** | **88.61 ± 0.41** | **90.22 ± 0.46** | **89.08 ± 0.46** |
| chcaa/da\_dacy\_large\_trf | 83.61 ± 1.18 | 78.90 ± 0.49 | 72.62 ± 0.58 | 53.35 ± 0.17 | 50.57 ± 0.46 | 51.72 ± 0.52 | 63.00 ± 0.57 |
| RecordedFuture/Swedish-NER | 64.09 ± 0.97 | 61.74 ± 0.50 | 56.67 ± 0.79 | 66.60 ± 0.27 | 34.54 ± 0.73 | 42.16 ± 0.83 | 53.32 ± 0.69 |
| Maltehb/danish-bert-botxo-ner-dane | 69.25 ± 1.17 | 60.57 ± 0.27 | 35.60 ± 1.19 | 38.37 ± 0.26 | 21.00 ± 0.57 | 27.88 ± 0.48 | 40.92 ± 0.64 |
| Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane | 70.41 ± 1.19 | 48.76 ± 0.70 | 27.58 ± 0.61 | 35.39 ± 0.38 | 26.22 ± 0.52 | 28.30 ± 0.29 | 39.70 ± 0.61 |
| radbrt/nb\_nocy\_trf | 56.82 ± 1.63 | 68.20 ± 0.75 | 69.22 ± 1.04 | 31.63 ± 0.29 | 20.32 ± 0.45 | 12.91 ± 0.50 | 38.08 ± 0.75 |
Aside from its high accuracy, it's also substantially **smaller** and **faster** than the previous state-of-the-art:
| **Model ID** | **Samples/second** | **Model size** |
| :----------- | -----------------: | -------------: |
| saattrupdan/nbailab-base-ner-scandi | 4.16 ± 0.18 | 676 MB |
| chcaa/da\_dacy\_large\_trf | 0.65 ± 0.01 | 2,090 MB |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 90135.90000000001
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Micro F1 No Misc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------------:|
| 0.6682 | 1.0 | 2816 | 0.0872 | 0.6916 | 0.7306 |
| 0.0684 | 2.0 | 5632 | 0.0464 | 0.8167 | 0.8538 |
| 0.0444 | 3.0 | 8448 | 0.0367 | 0.8485 | 0.8783 |
| 0.0349 | 4.0 | 11264 | 0.0316 | 0.8684 | 0.8920 |
| 0.0282 | 5.0 | 14080 | 0.0290 | 0.8820 | 0.9033 |
| 0.0231 | 6.0 | 16896 | 0.0283 | 0.8854 | 0.9060 |
| 0.0189 | 7.0 | 19712 | 0.0253 | 0.8964 | 0.9156 |
| 0.0155 | 8.0 | 22528 | 0.0260 | 0.9016 | 0.9201 |
| 0.0123 | 9.0 | 25344 | 0.0266 | 0.9059 | 0.9233 |
| 0.0098 | 10.0 | 28160 | 0.0280 | 0.9091 | 0.9279 |
| 0.008 | 11.0 | 30976 | 0.0309 | 0.9093 | 0.9287 |
| 0.0065 | 12.0 | 33792 | 0.0313 | 0.9103 | 0.9284 |
| 0.0053 | 13.0 | 36608 | 0.0322 | 0.9078 | 0.9257 |
| 0.0046 | 14.0 | 39424 | 0.0343 | 0.9075 | 0.9256 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
studio-ousia/luke-large-finetuned-conll-2003 | 2508abe6e591d7a9142d5ee9ab2eb5dccd7741fd | 2021-04-26T16:09:42.000Z | [
"pytorch",
"luke",
"transformers"
] | null | false | studio-ousia | null | studio-ousia/luke-large-finetuned-conll-2003 | 1,920 | null | transformers | 1,392 | Entry not found |
m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0 | 4454cfbc82952da79729e33e81c37a72dc095b4b | 2021-05-19T22:20:54.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | m-polignano-uniba | null | m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0 | 1,917 | 3 | transformers | 1,393 | Entry not found |
klue/roberta-small | f360e3d753b17f3b7508154fefdb042c706db147 | 2021-10-20T16:13:01.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ko",
"arxiv:2105.09680",
"transformers",
"korean",
"klue",
"autotrain_compatible"
] | fill-mask | false | klue | null | klue/roberta-small | 1,911 | null | transformers | 1,394 | ---
language: ko
tags:
- korean
- klue
mask_token: "[MASK]"
widget:
- text: 대한민국의 수도는 [MASK] 입니다.
---
# KLUE RoBERTa small
Pretrained RoBERTa Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details.
## How to use
_NOTE:_ Use `BertTokenizer` instead of RobertaTokenizer. (`AutoTokenizer` will load `BertTokenizer`)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("klue/roberta-small")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-small")
```
## BibTeX entry and citation info
```bibtex
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
monologg/koelectra-base-v3-naver-ner | 7fe2d3297113e0753716d7f2c85d4880d288542d | 2020-11-30T11:55:35.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | monologg | null | monologg/koelectra-base-v3-naver-ner | 1,911 | null | transformers | 1,395 | Entry not found |
hf-internal-testing/tiny-random-rembert | 917e1c9f997b17fc81d9ed84713f5de8abe57c1b | 2022-03-08T13:50:53.000Z | [
"pytorch",
"tf",
"rembert",
"feature-extraction",
"transformers",
"generated_from_keras_callback",
"model-index"
] | feature-extraction | false | hf-internal-testing | null | hf-internal-testing/tiny-random-rembert | 1,909 | null | transformers | 1,396 | ---
tags:
- generated_from_keras_callback
model-index:
- name: tiny-random-rembert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tiny-random-rembert
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
textattack/bert-base-uncased-MNLI | 3a97f689528cbd91bcc71ab29ea6c20c089d8f28 | 2021-05-20T07:31:58.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/bert-base-uncased-MNLI | 1,908 | null | transformers | 1,397 | Entry not found |
google/mt5-xxl | d4ac5e6d5125f8d30cba8763cd0ad71e5d34c17b | 2022-05-27T15:06:56.000Z | [
"pytorch",
"tf",
"mt5",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"arxiv:2010.11934",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/mt5-xxl | 1,906 | 8 | transformers | 1,398 | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's mT5](https://github.com/google-research/multilingual-t5)
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: mT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
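A minimal loading sketch for fine-tuning (note that the xxl checkpoint is very large, so substantial accelerator memory is assumed; the example sentence is illustrative):
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model_id = "google/mt5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

# the checkpoint is a plain pretrained LM: fine-tune it on your task before running inference
batch = tokenizer("Ein Beispielsatz auf Deutsch.", return_tensors="pt")
print(batch.input_ids.shape)
```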
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*
## Abstract
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available. |
ml6team/keyphrase-extraction-kbir-inspec | 70c7250d0cb932f4ee3332c50a73583b7cd7995d | 2022-06-16T14:51:11.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"dataset:midas/inspec",
"arxiv:2112.08547",
"transformers",
"keyphrase-extraction",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ml6team | null | ml6team/keyphrase-extraction-kbir-inspec | 1,906 | 2 | transformers | 1,399 | ---
language: en
license: mit
tags:
- keyphrase-extraction
datasets:
- midas/inspec
metrics:
- seqeval
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document.
Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading
it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail
and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents,
this process can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical
and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies
and context of words in a text."
example_title: "Example 1"
- text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points inF1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition(NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks."
example_title: "Example 2"
model-index:
- name: DeDeckerThomas/keyphrase-extraction-kbir-inspec
results:
- task:
type: keyphrase-extraction
name: Keyphrase Extraction
dataset:
type: midas/inspec
name: inspec
metrics:
- type: F1 (Seqeval)
value: 0.588
name: F1 (Seqeval)
- type: F1@M
value: 0.564
name: F1@M
---
# 🔑 Keyphrase Extraction Model: KBIR-inspec
Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.
Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
## 📓 Model Description
This model uses [KBIR](https://huggingface.co/bloomberg/KBIR) as its base model and fine-tunes it on the [Inspec dataset](https://huggingface.co/datasets/midas/inspec). KBIR or Keyphrase Boundary Infilling with Replacement is a pre-trained model which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC).
You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547).
Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not.
| Label | Description |
| ----- | ------------------------------- |
| B-KEY | At the beginning of a keyphrase |
| I-KEY | Inside a keyphrase |
| O | Outside a keyphrase |
Kulkarni, Mayank, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. "Learning Rich Representation of Keyphrases from Text." arXiv preprint arXiv:2112.08547 (2021).
Sahrawat, Dhruva, Debanjan Mahata, Haimin Zhang, Mayank Kulkarni, Agniv Sharma, Rakesh Gosangi, Amanda Stent, Yaman Kumar, Rajiv Ratn Shah, and Roger Zimmermann. "Keyphrase extraction as sequence labeling using contextualized embeddings." In European Conference on Information Retrieval, pp. 328-335. Springer, Cham, 2020.
## ✋ Intended Uses & Limitations
### 🛑 Limitations
* This keyphrase extraction model is very domain-specific and will perform very well on abstracts of scientific papers. It's not recommended to use this model for other domains, but you are free to test it out.
* Only works for English documents.
* For a custom model, please consult the [training notebook]() for more information.
### ❓ How To Use
```python
from transformers import (
TokenClassificationPipeline,
AutoModelForTokenClassification,
AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np
# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
def __init__(self, model, *args, **kwargs):
super().__init__(
model=AutoModelForTokenClassification.from_pretrained(model),
tokenizer=AutoTokenizer.from_pretrained(model),
*args,
**kwargs
)
def postprocess(self, model_outputs):
results = super().postprocess(
model_outputs=model_outputs,
aggregation_strategy=AggregationStrategy.SIMPLE,
)
return np.unique([result.get("word").strip() for result in results])
```
```python
# Load pipeline
model_name = "ml6team/keyphrase-extraction-kbir-inspec"
extractor = KeyphraseExtractionPipeline(model=model_name)
```
```python
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = extractor(text)
print(keyphrases)
```
```
# Output
['Artificial Intelligence' 'Keyphrase extraction' 'deep learning'
'linguistic features' 'machine learning' 'semantic meaning'
'text analysis']
```
## 📚 Training Dataset
[Inspec](https://huggingface.co/datasets/midas/inspec) is a keyphrase extraction/generation dataset consisting of 2000 English scientific papers from the scientific domains of Computers and Control and Information Technology, published between 1998 and 2002. The keyphrases are annotated by professional indexers or editors.
You can find more information in the [paper](https://dl.acm.org/doi/10.3115/1119355.1119383).
## 👷♂️ Training Procedure
For more in detail information, you can take a look at the [training notebook]().
### Training Parameters
| Parameter | Value |
| --------- | ------|
| Learning Rate | 1e-4 |
| Epochs | 50 |
| Early Stopping Patience | 3 |
### Preprocessing
The documents in the dataset are already preprocessed into list of words with the corresponding labels. The only thing that must be done is tokenization and the realignment of the labels so that they correspond with the right subword tokens.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Labels
label_list = ["B", "I", "O"]
lbl2idx = {"B": 0, "I": 1, "O": 2}
idx2label = {0: "B", 1: "I", 2: "O"}
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR", add_prefix_space=True)
max_length = 512
# Dataset parameters
dataset_full_name = "midas/inspec"
dataset_subset = "raw"
dataset_document_column = "document"
dataset_biotags_column = "doc_bio_tags"
def preprocess_fuction(all_samples_per_split):
tokenized_samples = tokenizer.batch_encode_plus(
all_samples_per_split[dataset_document_column],
padding="max_length",
truncation=True,
is_split_into_words=True,
max_length=max_length,
)
total_adjusted_labels = []
for k in range(0, len(tokenized_samples["input_ids"])):
prev_wid = -1
word_ids_list = tokenized_samples.word_ids(batch_index=k)
existing_label_ids = all_samples_per_split[dataset_biotags_column][k]
i = -1
adjusted_label_ids = []
for wid in word_ids_list:
if wid is None:
adjusted_label_ids.append(lbl2idx["O"])
elif wid != prev_wid:
i = i + 1
adjusted_label_ids.append(lbl2idx[existing_label_ids[i]])
prev_wid = wid
else:
adjusted_label_ids.append(
lbl2idx[
f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}"
]
)
total_adjusted_labels.append(adjusted_label_ids)
tokenized_samples["labels"] = total_adjusted_labels
return tokenized_samples
# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)
# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_fuction, batched=True)
```
### Postprocessing (Without Pipeline Function)
If you do not use the pipeline function, you must filter out the B and I labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed.
```python
# Define post_process functions
def concat_tokens_by_tag(keyphrases):
keyphrase_tokens = []
for id, label in keyphrases:
if label == "B":
keyphrase_tokens.append([id])
elif label == "I":
if len(keyphrase_tokens) > 0:
keyphrase_tokens[len(keyphrase_tokens) - 1].append(id)
return keyphrase_tokens
def extract_keyphrases(example, predictions, tokenizer, index=0):
keyphrases_list = [
(id, idx2label[label])
for id, label in zip(
np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
)
if idx2label[label] in ["B", "I"]
]
processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
extracted_kps = tokenizer.batch_decode(
processed_keyphrases,
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
return np.unique([kp.strip() for kp in extracted_kps])
```
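To run the functions above end-to-end, a forward pass is still needed to produce `predictions`. A minimal sketch reusing `idx2label` and the helpers defined earlier (the example text is illustrative):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "ml6team/keyphrase-extraction-kbir-inspec"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

text = "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document."
encoded = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**encoded).logits

predictions = logits.argmax(dim=-1).tolist()  # shape: (1, sequence_length)
print(extract_keyphrases(encoded, predictions, tokenizer))
```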
## 📝 Evaluation Results
Traditional evaluation methods are precision, recall and F1-score @k,m, where k stands for the first k predicted keyphrases and m for the average number of predicted keyphrases.
The model achieves the following results on the Inspec test set:
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| Inspec Test Set | 0.53 | 0.47 | 0.46 | 0.36 | 0.58 | 0.41 | 0.58 | 0.60 | 0.56 |
For more information on the evaluation process, you can take a look at the keyphrase extraction [evaluation notebook]().
## 🚨 Issues
Please feel free to start discussions in the Community Tab. |