modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
nateraw/hot-dog | dc768435246614205e59fcd0412937c6bb116083 | 2021-07-01T05:31:18.000Z | [
"pytorch",
"detr",
"object-detection",
"transformers"
] | object-detection | false | nateraw | null | nateraw/hot-dog | 18 | null | transformers | 8,800 | ---
tags:
- object-detection
- pytorch
---
# hot-dog
Ignore me...I'm broken. |
neuropark/sahajBERT-NER | 126f3f6642ea9056fbc3901e6720827ff03a51e1 | 2021-06-15T08:12:18.000Z | [
"pytorch",
"albert",
"token-classification",
"bn",
"dataset:xtreme",
"transformers",
"collaborative",
"bengali",
"NER",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | neuropark | null | neuropark/sahajBERT-NER | 18 | 2 | transformers | 8,801 |
---
language: bn
tags:
- collaborative
- bengali
- NER
license: apache-2.0
datasets: xtreme
metrics:
- Loss
- Accuracy
- Precision
- Recall
---
# sahajBERT Named Entity Recognition
## Model description
[sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).
Named Entities predicted by the model:
| Label id | Label |
|:--------:|:----:|
|0 |O|
|1 |B-PER|
|2 |I-PER|
|3 |B-ORG|
|4 |I-ORG|
|5 |B-LOC|
|6 |I-LOC|
## Intended uses & limitations
#### How to use
You can use this model directly with a pipeline for token classification:
```python
from transformers import AlbertForTokenClassification, TokenClassificationPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")
# Initialize model
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")
# Initialize pipeline
pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model)
raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```
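Continuing from the snippet above, the pipeline returns one entry per predicted entity token; the field names below follow the standard `transformers` token-classification output and are not specific to this model:
```python
# Inspect the predictions: token, predicted label, and confidence score.
for entity in output:
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```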
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).
## Training procedure
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
## Eval results
- Loss: 0.11714419722557068
- Accuracy: 0.9772286821705426
- Precision: 0.9585365853658536
- Recall: 0.9651277013752456
- F1: 0.9618208516886931
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
|
nielsr/convnext-xlarge-224-22k-1k | 98e544a4f7a730d24dd472bd9ecf87f0694ca72e | 2022-02-22T12:35:38.000Z | [
"pytorch",
"convnext",
"image-classification",
"transformers"
] | image-classification | false | nielsr | null | nielsr/convnext-xlarge-224-22k-1k | 18 | null | transformers | 8,802 | Entry not found |
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast | 516f02edce4a408d4b46fe90b9c9e226cba842a0 | 2022-01-20T18:06:05.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | nntadotzip | null | nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast | 18 | null | transformers | 8,803 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3489
## Model description
More information needed
## Intended uses & limitations
More information needed
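No official usage example is provided. As a rough, unofficial sketch (the task is inferred from the model's `question-answering` tag, and the question/context inputs below are made-up placeholders), the checkpoint can be loaded with the standard pipeline:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast",
)

# Placeholder inputs -- replace with your own question and context.
result = qa(
    question="Who maintains the ontology?",
    context="The IU chatbot ontology is maintained by the university's IT department.",
)
print(result)
```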
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 382 | 0.4695 |
| 0.5633 | 2.0 | 764 | 0.3361 |
| 0.3533 | 3.0 | 1146 | 0.3489 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nyu-mll/roberta-med-small-1M-2 | d57b4ce9b7d78f0980fcb2d43b2a272677871318 | 2021-05-20T19:07:56.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-med-small-1M-2 | 18 | null | transformers | 8,804 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release the 3 models with the lowest perplexities for each pretraining data size, out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus built from Smashwords texts, in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
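For a quick sanity check (not part of the original release notes), any of the checkpoints in the table above can be loaded as a standard RoBERTa masked-LM; the example sentence is arbitrary:
```python
from transformers import pipeline

# Substitute any model id from the table above.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-med-small-1M-2")
print(fill_mask("The capital of France is <mask>."))
```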
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
patrickvonplaten/reformer-tiny-random | b28e78c699eb382c5c533475a87f64f26394513b | 2021-05-20T02:18:13.000Z | [
"pytorch",
"bert",
"text-generation",
"transformers"
] | text-generation | false | patrickvonplaten | null | patrickvonplaten/reformer-tiny-random | 18 | null | transformers | 8,805 | Entry not found |
pere/norwegian-gpt2-vgd | a4e18964aa637471296c11a09b6491c5ebe009d2 | 2021-11-02T21:15:41.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"no",
"transformers",
"norwegian",
"GPT2",
"casual language modeling",
"license:cc-by-4.0"
] | text-generation | false | pere | null | pere/norwegian-gpt2-vgd | 18 | null | transformers | 8,806 | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- casual language modeling
---
# Norwegian GPT-2 - Social
## Description
A private test of GPT-2 fine-tuning based on the vgd dataset.
The following sub-corpora are used for the base model:
```bash
wikipedia_download_nb.jsonl
wikipedia_download_nn.jsonl
newspapers_online_nb.jsonl
newspapers_online_nn.jsonl
twitter_2016_2018_no.jsonl
twitter_news_2016_2018_no.jsonl
open_subtitles_no.jsonl
facebook_no.jsonl
reddit_no.jsonl
vgdebatt_no.jsonl
```
Finetuned on the private dataset located at NbAiLab/vgd.
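As a rough, unofficial usage sketch (the Norwegian prompt and generation settings below are illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pere/norwegian-gpt2-vgd")
# Illustrative prompt and sampling settings.
print(generator("Jeg lurer på om", max_length=50, do_sample=True, top_k=50))
```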
|
pertschuk/albert-base-quora-classifier | 052bb0476fc6840b5e8ac59461e2709644597b61 | 2020-04-24T16:04:59.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | pertschuk | null | pertschuk/albert-base-quora-classifier | 18 | null | transformers | 8,807 | Entry not found |
philippelaban/summary_loop10 | 651e90be5498581fc2532b1a4cab085525e374aa | 2022-02-09T22:02:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"license:apache-2.0"
] | summarization | false | philippelaban | null | philippelaban/summary_loop10 | 18 | 2 | transformers | 8,808 | ---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
---
# Try out in the Hosted inference API
In the right panel, you can try out the model (although it only handles a short sequence length).
Enter the document you want to summarize in the panel on the right.
# Model Loading
The model (based on a GPT2 base architecture) can be loaded in the following way:
```
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop10")
tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop10")
```
# Example Use
```
document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?"
tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda()
input_shape = tokenized_document.shape
outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True)
candidate_sequences = outputs.sequences[:, input_shape[1]:] # Remove the encoded text, keep only the summary
candidate_scores = outputs.sequences_scores.tolist()
for candidate_tokens, score in zip(candidate_sequences, candidate_scores):
summary = tokenizer.decode(candidate_tokens)
print("[Score: %.3f] %s" % (score, summary[:summary.index("END")]))
```
# Example output
```
[Score: -0.084] Here's what you need to know about rockfalls
[Score: -0.087] Here's what you need to know about these tracks
[Score: -0.091] Here's what we know so far about these tracks
[Score: -0.101] Here's what you need to know about rockfall
```
# Github repo
You can access more information, access to the scoring function, the training script, or an example training log on the Github repo: https://github.com/CannyLab/summary_loop |
philschmid/mt5-small-prompted-germanquad-1 | 7e4252389899b17fb8d4659d9784c6c8ab506297 | 2021-12-24T11:10:03.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:philschmid/prompted-germanquad",
"transformers",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | philschmid | null | philschmid/mt5-small-prompted-germanquad-1 | 18 | null | transformers | 8,809 | ---
license: apache-2.0
tags:
- summarization
datasets:
- philschmid/prompted-germanquad
widget:
- text: |
Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. Derzeit arbeitet er als Machine Learning Engineer und Tech Lead bei Hugging Face, um künstliche Intelligenz durch Open Source und Open Science zu demokratisieren.
Welches Ziel hat Hugging Face?
metrics:
- rouge
model-index:
- name: mt5-small-prompted-germanquad-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-prompted-germanquad-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [philschmid/prompted-germanquad](https://huggingface.co/datasets/philschmid/prompted-germanquad) dataset, a prompted dataset created with the [BigScience PromptSource library](https://github.com/bigscience-workshop/promptsource). The dataset is a copy of [germanquad](https://huggingface.co/datasets/deepset/germanquad) to which the `squad` template was applied and which was then translated to German. [TEMPLATE](https://github.com/philschmid/promptsource/blob/main/promptsource/templates/germanquad/templates.yaml).
This is a first test of whether it is possible to fine-tune `mt5` models to solve tasks similar to BigScience's `T0`, but for the German language.
It achieves the following results on the evaluation set:
- Loss: 1.6835
- Rouge1: 27.7309
- Rouge2: 18.7311
- Rougel: 27.4704
- Rougelsum: 27.4818
## Model description
More information needed
## Intended uses & limitations
More information needed
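As an unofficial usage sketch, the checkpoint can be run with the `text2text-generation` pipeline. The prompt below is the widget example from this card; the generation settings are illustrative only:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="philschmid/mt5-small-prompted-germanquad-1")

# Prompt taken from the widget example in this card.
prompt = (
    "Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. "
    "Derzeit arbeitet er als Machine Learning Engineer und Tech Lead bei Hugging Face, "
    "um künstliche Intelligenz durch Open Source und Open Science zu demokratisieren. "
    "Welches Ziel hat Hugging Face?"
)
print(generator(prompt, max_length=64))
```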
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.3795 | 1.0 | 17496 | 2.0693 | 15.8652 | 9.2569 | 15.6237 | 15.6142 |
| 2.3582 | 2.0 | 34992 | 1.9057 | 21.9348 | 14.0057 | 21.6769 | 21.6825 |
| 2.1809 | 3.0 | 52488 | 1.8143 | 24.3401 | 16.0354 | 24.0862 | 24.0914 |
| 2.0721 | 4.0 | 69984 | 1.7563 | 25.8672 | 17.2442 | 25.5854 | 25.6051 |
| 2.0004 | 5.0 | 87480 | 1.7152 | 27.0275 | 18.0548 | 26.7561 | 26.7685 |
| 1.9531 | 6.0 | 104976 | 1.6939 | 27.4702 | 18.5156 | 27.2027 | 27.2107 |
| 1.9218 | 7.0 | 122472 | 1.6835 | 27.7309 | 18.7311 | 27.4704 | 27.4818 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
pinecone/mpnet-retriever-squad2 | cac1e06fed72fb1f81c9828d4eeb8a16621d7ebf | 2022-01-03T02:42:15.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | pinecone | null | pinecone/mpnet-retriever-squad2 | 18 | 2 | sentence-transformers | 8,810 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
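Since this checkpoint was trained as a retriever, a minimal semantic-search sketch (not part of the original card template) using `sentence_transformers.util.cos_sim` may be useful; the query and passages below are made-up examples, and `{MODEL_NAME}` should be replaced as above:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

# Made-up query and passages, purely for illustration.
query_emb = model.encode("When was the model released?", convert_to_tensor=True)
passage_embs = model.encode(
    ["The model was released in 2022.", "Bananas are rich in potassium."],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, passage_embs))  # higher score = better match
```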
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 5429 with parameters:
```
{'batch_size': 24}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 542,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
prajwalcr/poetry-surprise_gpt2 | 944d9ca68c75383097a8535fbe77519a6dcbe9b7 | 2021-08-03T10:04:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-surprise_gpt2 | 18 | null | transformers | 8,811 | Entry not found |
pucpr/biobertpt-bio | f02ec2f9c1687aa236c0e23fb00d452d0aacda76 | 2021-10-13T09:27:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"pt",
"dataset:biomedical literature from Scielo and Pubmed",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pucpr | null | pucpr/biobertpt-bio | 18 | 4 | transformers | 8,812 | ---
language: "pt"
widget:
- text: "O principal [MASK] da COVID-19 é tosse seca."
- text: "O vírus da gripe apresenta um [MASK] constituído por segmentos de ácido ribonucleico."
datasets:
- biomedical literature from Scielo and Pubmed
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# BioBERTpt - Portuguese Clinical and Biomedical BERT
The [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) paper contains clinical and biomedical BERT-based models for Portuguese Language, initialized with BERT-Multilingual-Cased & trained on clinical notes and biomedical literature.
This model card describes the BioBERTpt(bio) model, a biomedical version of BioBERTpt, trained on Portuguese biomedical literature from scientific papers from Pubmed and Scielo.
## How to use the model
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("pucpr/biobertpt-bio")
model = AutoModel.from_pretrained("pucpr/biobertpt-bio")
```
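As a quick, unofficial check, the model can also be run with the fill-mask pipeline; the example sentence is the widget text from this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pucpr/biobertpt-bio")
print(fill_mask("O principal [MASK] da COVID-19 é tosse seca."))
```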
## More Information
Refer to the original paper, [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) for additional details and performance on Portuguese NER tasks.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt). |
raynardj/roberta-pubmed | 58d63994a9357d5d2651fec4cab6804dbe9580be | 2021-10-08T02:58:27.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:pubmed",
"transformers",
"pubmed",
"cancer",
"gene",
"clinical trial",
"bioinformatic",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | raynardj | null | raynardj/roberta-pubmed | 18 | 1 | transformers | 8,813 | ---
language:
- en
tags:
- pubmed
- cancer
- gene
- clinical trial
- bioinformatic
license: apache-2.0
datasets:
- pubmed
widget:
- text: "The <mask> effects of hyperatomarin"
---
# Roberta-Base fine-tuned on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) Abstract
> We limit the training textual data to the following [MeSH](https://www.ncbi.nlm.nih.gov/mesh/)
* All the child MeSH of ```Biomarkers, Tumor(D014408)```, including things like ```Carcinoembryonic Antigen(D002272)```
* All the child MeSH of ```Carcinoma(D002277)```, including things like all kinds of carcinoma: like ```Carcinoma, Lewis Lung(D018827)``` etc. around 80 kinds of carcinoma
* All the child MeSH of ```Clinical Trial(D016439)```
* The training text file amounts to 531Mb
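As an unofficial usage sketch, the model can be queried with the fill-mask pipeline; the example below is the widget text from this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="raynardj/roberta-pubmed")
print(fill_mask("The <mask> effects of hyperatomarin"))
```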
## Training
* Trained on the masked language modeling task, with ```mlm_probability=0.15```, on 2 Tesla V100 32G GPUs
```python
training_args = TrainingArguments(
output_dir=config.save, #select model path for checkpoint
overwrite_output_dir=True,
num_train_epochs=3,
per_device_train_batch_size=30,
per_device_eval_batch_size=60,
evaluation_strategy= 'steps',
save_total_limit=2,
eval_steps=250,
metric_for_best_model='eval_loss',
greater_is_better=False,
load_best_model_at_end =True,
prediction_loss_only=True,
report_to = "none")
``` |
salesken/content_generation_from_phrases | cc3700cab3cf3a99076f95b606574f96e59e2722 | 2021-05-23T12:23:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers",
"salesken",
"license:apache-2.0"
] | text-generation | false | salesken | null | salesken/content_generation_from_phrases | 18 | null | transformers | 8,814 |
---
tags: salesken
license: apache-2.0
inference: false
---
We attempted an entailment-encouraging text generation model that generates content from a short phrase.
Some of the generated sentences, like the ones below for the phrase "data science beginner", really got us excited about the potential applications:
<b> ['Where can I find a list of questions, tutorials, and resources for getting a data scientist job?
'Do you know of any research articles about how to improve your skills as a Data Science/Data Management Programmer? ',
'What are the pros and cons to having a Data Science/Data Mining Masters? '] .</b>
Utility of the model? Automate your conversational AI training data creation process: feed meaningful phrases to the model to generate entailment-encouraging sentences, select the most diverse ones and generate semantic variations for them using our paraphrase generation model (https://huggingface.co/salesken/paraphrase_generation), then rank the generated sentences to encourage diversity using our NLG ranker model (https://huggingface.co/salesken/paraphrase_diversity_ranker).
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import pprint
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/content_generation_from_phrases")
model = AutoModelWithLMHead.from_pretrained("salesken/content_generation_from_phrases").to(device)
input_query=["data science beginner"]
query = "<|startoftext|> " + input_query[0] + " ~~"
input_ids = tokenizer.encode(query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=256,
temperature=0.9,
top_k = 30,
num_return_sequences=100)
content = []
for i in range(len(sample_outputs)):
r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
r = r.split(' ~~ ')[1]
if r not in content:
content.append(r)
pprint.pprint(content)
```
You may use our ranker model to rank the generated content to encourage diversity.
https://huggingface.co/salesken/paraphrase_diversity_ranker
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd
import numpy as np
rank_tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_diversity_ranker")
rank_model = AutoModelForSequenceClassification.from_pretrained("salesken/paraphrase_diversity_ranker")
content_pairs=list(pd.MultiIndex.from_product([input_query, content]))
features = rank_tokenizer(content_pairs, padding=True, truncation=True, return_tensors="pt")
rank_model.eval()
with torch.no_grad():
scores = rank_model(**features).logits
label_mapping = ['surface_level_variation', 'semantic_variation']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
generated_content= np.array(content)[scores[:,1].sort(descending=True).indices].tolist()
```
|
textattack/roberta-base-WNLI | fcf1b6036509b5b0b43116873e3ba4b1da56a74e | 2021-05-20T22:13:50.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/roberta-base-WNLI | 18 | null | transformers | 8,815 | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after epoch 0.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
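A minimal, unofficial sketch for scoring a WNLI-style sentence pair with this checkpoint (the example pair below is made up for illustration):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("textattack/roberta-base-WNLI")
model = AutoModelForSequenceClassification.from_pretrained("textattack/roberta-base-WNLI")

# Made-up WNLI-style sentence pair.
inputs = tokenizer(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "The trophy is too big.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```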
|
textattack/xlnet-large-cased-CoLA | 4fb7b9627f837f36170be6fa8f37b5f95dcac9b0 | 2020-06-09T16:57:33.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/xlnet-large-cased-CoLA | 18 | null | transformers | 8,816 | Entry not found |
textattack/xlnet-large-cased-STS-B | 6d0282faa6cc66440a1dabc1111526d242a1c4c0 | 2020-06-09T16:59:30.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/xlnet-large-cased-STS-B | 18 | null | transformers | 8,817 | Entry not found |
tiennvcs/layoutlmv2-large-uncased-finetuned-infovqa | c2c1495c9e4e4963eaa8e95c303a9770ed6f6687 | 2021-11-09T13:42:04.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/layoutlmv2-large-uncased-finetuned-infovqa | 18 | 1 | transformers | 8,818 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-large-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-large-uncased-finetuned-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-large-uncased](https://huggingface.co/microsoft/layoutlmv2-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
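For reference, the settings listed above correspond roughly to the following unofficial `TrainingArguments` sketch; `output_dir` is a placeholder and all unlisted arguments are left at their defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv2-large-uncased-finetuned-infovqa",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=250500,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```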
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1829 | 0.08 | 500 | 3.6339 |
| 3.5002 | 0.16 | 1000 | 3.0721 |
| 2.9556 | 0.24 | 1500 | 2.8731 |
| 2.8939 | 0.33 | 2000 | 3.1566 |
| 2.6986 | 0.41 | 2500 | 3.1023 |
| 2.7569 | 0.49 | 3000 | 2.7743 |
| 2.6391 | 0.57 | 3500 | 2.5023 |
| 2.4277 | 0.65 | 4000 | 2.5465 |
| 2.4242 | 0.73 | 4500 | 2.4709 |
| 2.3978 | 0.82 | 5000 | 2.4019 |
| 2.2653 | 0.9 | 5500 | 2.3383 |
| 2.3916 | 0.98 | 6000 | 2.4765 |
| 1.9423 | 1.06 | 6500 | 2.3798 |
| 1.8538 | 1.14 | 7000 | 2.3628 |
| 1.8136 | 1.22 | 7500 | 2.3671 |
| 1.7808 | 1.31 | 8000 | 2.5585 |
| 1.7772 | 1.39 | 8500 | 2.5862 |
| 1.755 | 1.47 | 9000 | 2.3105 |
| 1.6529 | 1.55 | 9500 | 2.2417 |
| 1.6956 | 1.63 | 10000 | 2.1755 |
| 1.5713 | 1.71 | 10500 | 2.2917 |
| 1.565 | 1.79 | 11000 | 2.0838 |
| 1.615 | 1.88 | 11500 | 2.2111 |
| 1.5249 | 1.96 | 12000 | 2.2207 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.0+cu101
- Datasets 1.15.1
- Tokenizers 0.10.3
|
tli8hf/unqover-bert-base-uncased-newsqa | c479a1b05c710946148a24e0373d7602a9cff824 | 2021-05-20T07:53:24.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tli8hf | null | tli8hf/unqover-bert-base-uncased-newsqa | 18 | null | transformers | 8,819 | Entry not found |
trig/multiverse | b555c783b0abddfe3c2df713022a2c4348a006bf | 2021-08-29T18:05:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | trig | null | trig/multiverse | 18 | null | transformers | 8,820 | ---
tags:
- conversational
---
# chatbot using multiple shows |
tyqiangz/indobert-lite-large-p2-smsa | e1ca516d9e58ba32ebbf6164f928abca78e4974b | 2021-10-06T17:12:46.000Z | [
"pytorch",
"albert",
"text-classification",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] | text-classification | false | tyqiangz | null | tyqiangz/indobert-lite-large-p2-smsa | 18 | 1 | transformers | 8,821 | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: true
datasets:
- Indo4B
---
# IndoBERT-Lite Large Model (phase2 - uncased) Finetuned on IndoNLU SmSA dataset
Fine-tuned the IndoBERT-Lite Large (phase2 - uncased) model on the IndoNLU SmSA dataset, following the procedures stated in the paper [IndoNLU: Benchmark and Resources for Evaluating Indonesian
Natural Language Understanding](https://arxiv.org/pdf/2009.05387.pdf).
## How to use
```python
from transformers import pipeline
classifier = pipeline("text-classification",
model='tyqiangz/indobert-lite-large-p2-smsa',
return_all_scores=True)
text = "Penyakit koronavirus 2019"
prediction = classifier(text)
prediction
"""
Output:
[[{'label': 'positive', 'score': 0.0006000096909701824},
{'label': 'neutral', 'score': 0.01223431620746851},
{'label': 'negative', 'score': 0.987165629863739}]]
"""
```
**Finetuning hyperparameters:**
- learning rate: 2e-5
- batch size: 16
- no. of epochs: 5
- max sequence length: 512
- random seed: 42
**Classes:**
- 0: positive
- 1: neutral
- 2: negative
**Performance metrics on SmSA validation dataset**
- Validation accuracy: 0.94
- Validation F1: 0.91
- Validation Recall: 0.91
- Validation Precision: 0.93
|
uclanlp/plbart-multi_task-strong | b958e874bf2ab98f2f62ce449e3a13013605580c | 2022-03-02T07:42:23.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-strong | 18 | null | transformers | 8,822 | Entry not found |
vasudevgupta/mbart-summarizer-interiit | 8e2bfd5ac2e731bd0d1274735c9bfbaa62c0a86a | 2021-03-28T17:49:15.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vasudevgupta | null | vasudevgupta/mbart-summarizer-interiit | 18 | null | transformers | 8,823 | This model is trained as a part of **InterIIT'21 competition**, on the dataset provided by Bridgei2i. It is able to do multilingual (Hindi, English, Hinglish) summarization (many -> one) & is capable of generating summaries in English regardless of the input language.
| Rouge-L | Sacrebleu | Headline Similarity (using sentence-transformers) |
|-----------------------|-----------|---------------------------------------------------|
| p=0.46 r=0.49 f1=0.52 | 23.46 | 0.75 |
mBART is initialized from **facebook/mbart-large-cc25** and is trained as per the strategy described in our [GitHub](https://github.com/vasudevgupta7/Bridgei2i-Winning-Solutions). |
vishnun/bert-base-cased-tamil-mix-sentiment | 940036b33e6732512ee1474a3a5eb5c1aca02aee | 2021-08-14T09:51:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vishnun | null | vishnun/bert-base-cased-tamil-mix-sentiment | 18 | null | transformers | 8,824 | # Tamil Mix Sentiment analysis
The model is trained on the tamil-mix-sentiment dataset, fine-tuned from a bert-base-cased backbone.
## Inference usage
In the hosted inference widget, type in the text you want to classify.
Eg: Super a iruku bro intha work, vera level mass |
vwoloszyn/gtp2-email | 7218e48862ce6fed78e94f41195a32ea494fe12c | 2022-02-08T00:24:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | vwoloszyn | null | vwoloszyn/gtp2-email | 18 | null | transformers | 8,825 | Entry not found |
w11wo/indo-roberta-small | 9cb35a1ae4b311b4fc09348c2f84ceda5fe47605 | 2021-05-20T23:08:29.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"id",
"dataset:wikipedia",
"arxiv:1907.11692",
"transformers",
"indo-roberta-small",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | w11wo | null | w11wo/indo-roberta-small | 18 | null | transformers | 8,826 | ---
language: id
tags:
- indo-roberta-small
license: mit
datasets:
- wikipedia
widget:
- text: "Karena pandemi ini, kita harus <mask> di rumah saja."
---
## Indo RoBERTa Small
Indo RoBERTa Small is a masked language model based on the [RoBERTa model](https://arxiv.org/abs/1907.11692). It was trained on the latest (late December 2020) Indonesian Wikipedia articles.
The model was trained from scratch and achieved a perplexity of 48.27 on the validation dataset (20% of the articles). Many of the techniques used
are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), where Sylvain Gugger fine-tuned a [DistilGPT-2](https://huggingface.co/distilgpt2) on [Wikitext2](https://render.githubusercontent.com/view/ipynb?color_mode=dark&commit=43d63e390e8a82f7ae49aa1a877419343a213cb4&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f68756767696e67666163652f6e6f7465626f6f6b732f343364363365333930653861383266376165343961613161383737343139333433613231336362342f6578616d706c65732f6c616e67756167655f6d6f64656c696e672e6970796e62&nwo=huggingface%2Fnotebooks&path=examples%2Flanguage_modeling.ipynb&repository_id=272452525&repository_type=Repository).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base RoBERTa model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------|---------|----------|---------------------------------------|
| `indo-roberta-small` | 84M | RoBERTa | Indonesian Wikipedia (3.1 GB of text) |
## Evaluation Results
The model was trained for 3 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 4.071 | 3.876 | 48.27 | 3:40:55 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/indo-roberta-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Budi sedang <mask> di sekolah.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/indo-roberta-small"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Budi sedang berada di sekolah."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Author
Indo RoBERTa Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner | c49a532d3ae3a22509e769e5f3fd045a577856fc | 2021-05-20T09:07:16.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner | 18 | null | transformers | 8,827 | Entry not found |
yhavinga/t5-v1.1-large-dutch-cnn-test | 537a589a88f69a43f55ba0bf43ae09ea4cc6a559 | 2022-01-16T13:26:39.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"nl",
"dataset:yhavinga/mc4_nl_cleaned",
"dataset:ml6team/cnn_dailymail_nl",
"transformers",
"seq2seq",
"lm-head",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | yhavinga | null | yhavinga/t5-v1.1-large-dutch-cnn-test | 18 | null | transformers | 8,828 | ---
language:
- nl
datasets:
- yhavinga/mc4_nl_cleaned
- ml6team/cnn_dailymail_nl
tags:
- seq2seq
- lm-head
license: apache-2.0
inference: false
---
# T5 v1.1 Large finetuned for CNN news summarization in Dutch 🇳🇱
This model is [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) finetuned on [CNN Dailymail NL](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for
the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!
Rouge scores for this model are listed below.
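A minimal, unofficial usage sketch (the Dutch input text is a placeholder and the generation settings are illustrative only):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="yhavinga/t5-v1.1-large-dutch-cnn-test")

# Placeholder Dutch article text -- replace with a real news article.
article = "Vervang deze tekst door een Nederlands nieuwsartikel van enkele alinea's."
print(summarizer(article, max_length=96))
```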
## Tokenizer
* SentencePiece tokenizer trained from scratch for Dutch on mC4 nl cleaned with scripts from the Huggingface
Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
## Dataset
All models listed below are trained on of the `full` configuration (39B tokens) of
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty, Naughty, Obscene, and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
## Models
TL;DR: [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) is the best model.
* `yhavinga/t5-base-dutch` is a re-training of the Dutch T5 base v1.0 model trained during the summer 2021
Flax/Jax community week. Accuracy was improved from 0.64 to 0.70.
* The two T5 v1.1 base models are an uncased and cased version of `t5-v1.1-base`, again pre-trained from scratch on Dutch,
with a tokenizer also trained from scratch. The t5 v1.1 models are slightly different from the t5 models, and the
base models are trained with a dropout of 0.0. For fine-tuning it is intended to set this back to 0.1.
* The large cased model is a pre-trained Dutch version of `t5-v1.1-large`. Training of t5-v1.1-large proved difficult.
Without dropout regularization, the training would diverge at a certain point. With dropout, training went better,
albeit much slower than training the t5 model. At some point convergence was too slow to warrant further training.
The latest checkpoint, training scripts and metrics are available for reference. For actual fine-tuning the cased
base model is probably the better choice.
| | model | train seq len | acc | loss | batch size | epochs | steps | dropout | optim | lr | duration |
|---------------------------------------------------------------------------------------------------|---------|---------------|----------|----------|------------|--------|---------|---------|-----------|------|----------|
| [yhavinga/t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | T5 | 512 | 0,70 | 1,38 | 128 | 1 | 528481 | 0.1 | adafactor | 5e-3 | 2d 9h |
| [yhavinga/t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | t5-v1.1 | 1024 | 0,73 | 1,20 | 64 | 2 | 1014525 | 0.0 | adafactor | 5e-3 | 5d 5h |
| [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | t5-v1.1 | 1024 | **0,78** | **0,96** | 64 | 2 | 1210000 | 0.0 | adafactor | 5e-3 | 6d 6h |
| [yhavinga/t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased)     | t5-v1.1 | 512           | 0,76     | 1,07     | 64         | 1      | 1120000 | 0.1     | adafactor | 5e-3 | 8d 13h   |
The cased t5-v1.1 Dutch models were fine-tuned on summarizing the CNN Daily Mail dataset.
| | model | input len | target len | Rouge1 | Rouge2 | RougeL | RougeLsum | Test Gen Len | epochs | batch size | steps | duration |
|-------------------------------------------------------------------------------------------------------|---------|-----------|------------|--------|--------|--------|-----------|--------------|--------|------------|-------|----------|
| [yhavinga/t5-v1.1-base-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) | t5-v1.1 | 1024 | 96 | 34,8 | 13,6 | 25,2 | 32,1 | 79 | 6 | 64 | 26916 | 2h 40m |
| [yhavinga/t5-v1.1-large-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cnn-test) | t5-v1.1 | 1024 | 96 | 34,4 | 13,6 | 25,3 | 31,7 | 81 | 5 | 16 | 89720 | 11h |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was also
instrumental in many, if not all parts of the training. The following repositories where helpful in setting up the TPU-VM,
and training the models:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [HuggingFace Flax MLM examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) |
youzanai/bert-shipping-address-chinese | d6c470ee787ed9cb95f20c535e214d4977a30b12 | 2022-03-21T02:43:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | youzanai | null | youzanai/bert-shipping-address-chinese | 18 | null | transformers | 8,829 | ---
license: apache-2.0
---
A BERT model trained on a corpus of Youzan customer shipping addresses.
For example code, see https://github.com/youzanai/trexpark |
Davlan/xlm-roberta-base-finetuned-zulu | d6750eceb456ed59716e82cb9f988cd22b1d62a8 | 2022-02-25T14:50:25.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-zulu | 18 | null | transformers | 8,830 | Entry not found |
cnicu/pegasus-large-booksum | f3238accc4b91cd60ba7595c1757fc82707de2ff | 2022-02-28T12:12:37.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | cnicu | null | cnicu/pegasus-large-booksum | 18 | null | transformers | 8,831 | ---
license: mit
tags:
- summarization
datasets:
- kmfoda/booksum
---
|
ghadeermobasher/Model_org_2 | 871cf28d066b36524a0eec5828939633409974af | 2022-03-02T10:06:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Model_org_2 | 18 | null | transformers | 8,832 | Entry not found |
davanstrien/flyswot_test | adcf0d50a8b79ab90ca5ac72f80b11e133c19bb1 | 2022-03-01T18:06:33.000Z | [
"pytorch",
"convnext",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | davanstrien | null | davanstrien/flyswot_test | 18 | null | transformers | 8,833 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: flyswot_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flyswot_test
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1518
- eval_f1: 0.9595
- eval_runtime: 5.9337
- eval_samples_per_second: 69.603
- eval_steps_per_second: 2.191
- epoch: 7.0
- step: 364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
abdelhalim/Shower_Sound_Recognition | 9da22aa51599aad82ff6082fb1f84d230e38a029 | 2022-03-03T22:09:48.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"dataset:SHD-2",
"transformers",
"audio",
"audio-classificaiton",
"shower detection"
] | audio-classification | false | abdelhalim | null | abdelhalim/Shower_Sound_Recognition | 18 | null | transformers | 8,834 | ---
datasets:
- SHD-2
tags:
- audio
- audio-classificaiton
- shower detection
metrics:
- Accuracy
---
**Context**
Most of our brilliant ideas happen in periods of relaxation, like taking a shower; however, once we leave the shower, we forget the brilliant idea. What if we did not forget, and instead collected your ideas in the shower?
**What is the Shower Ideas concept?**
This is an app that detects when someone is taking a shower and asks "do you have any ideas?"; the person can then speak the idea aloud while showering. The app will also ask questions after the shower.
**Abstract about the model**
This model was trained based on *facebook/wav2vec2-base-960h* (a model pretrained on 960 hours of Librispeech 16kHz sampled speech audio) in order to classify the audio input as shower or no_shower.
**Dataset**
The SHD-2 dataset is a labeled collection of 2260 audio recordings of shower and no shower sounds.
The dataset consists of 6-second-long recordings organized into 2 classes (with 1130 examples per class).
# Usage
In order to use the model in your Python script just copy the following code:
```python
from transformers import pipeline
audio_input = 'example.wav'
classifier = pipeline("audio-classification", model="abdelhalim/Shower_Sound_Recognition")
labels = classifier(audio_input)
labels
``` |
drAbreu/bioBERT-NER-BC2GM_corpus | 99d3d7708b2b57d733a31fcb4347abc237c06a18 | 2022-03-15T14:44:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:bc2gm_corpus",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | drAbreu | null | drAbreu/bioBERT-NER-BC2GM_corpus | 18 | null | transformers | 8,835 | ---
tags:
- generated_from_trainer
datasets:
- bc2gm_corpus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bioBERT-finrtuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: bc2gm_corpus
type: bc2gm_corpus
args: bc2gm_corpus
metrics:
- name: Precision
type: precision
value: 0.7932528628907459
- name: Recall
type: recall
value: 0.8373080692584123
- name: F1
type: f1
value: 0.8146853146853147
- name: Accuracy
type: accuracy
value: 0.9750375532003672
widget:
- text: "JUP, AKT1, and AURKC are examples of genes"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bioBERT-finrtuned-ner
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the bc2gm_corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0887
- Precision: 0.7933
- Recall: 0.8373
- F1: 0.8147
- Accuracy: 0.9750
## Model description
More information needed
## Intended uses & limitations
More information needed
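As an unofficial usage sketch, the model can be run with the token-classification pipeline; the example sentence is the widget text from this card, and `aggregation_strategy` is optional:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="drAbreu/bioBERT-NER-BC2GM_corpus",
    aggregation_strategy="simple",  # groups sub-tokens into full gene mentions
)
print(ner("JUP, AKT1, and AURKC are examples of genes"))
```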
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0893 | 1.0 | 1563 | 0.0748 | 0.7447 | 0.8063 | 0.7743 | 0.9722 |
| 0.0507 | 2.0 | 3126 | 0.0773 | 0.7928 | 0.8275 | 0.8098 | 0.9739 |
| 0.0286 | 3.0 | 4689 | 0.0887 | 0.7933 | 0.8373 | 0.8147 | 0.9750 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
ndubuisi/pfam_init | 6fb5ba8b9a5291a8f4af05b050146671d2c31cc2 | 2022-03-09T06:20:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ndubuisi | null | ndubuisi/pfam_init | 18 | null | transformers | 8,836 | Entry not found |
ctu-aic/xlm-roberta-large-xnli-csfever | 9221e9e7a6a57f5e3d8fe20d1bcf4fa304f2c113 | 2022-03-11T12:30:17.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"license:cc-by-sa-3.0"
] | text-classification | false | ctu-aic | null | ctu-aic/xlm-roberta-large-xnli-csfever | 18 | 1 | transformers | 8,837 | ---
license: cc-by-sa-3.0
---
|
simonschoe/TransformationTransformer | 9acedf888cc699f04a35f1772cefb5facae3185d | 2022-07-28T15:04:47.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"transformers"
] | text-classification | false | simonschoe | null | simonschoe/TransformationTransformer | 18 | null | transformers | 8,838 | ---
language:
- en
pipeline_tag: text-classification
tags:
widget:
- text: "And it was great to see how our Chinese team very much aware of that and of shifting all the resourcing to really tap into these opportunities."
example_title: "Examplary Transformation Sentence"
- text: "But we will continue to recruit even after that because we expect that the volumes are going to continue to grow."
example_title: "Examplary Non-Transformation Sentence"
- text: "So and again, we'll be disclosing the current taxes that are there in Guyana, along with that revenue adjustment."
example_title: "Examplary Non-Transformation Sentence"
---
# TransformationTransformer
**TransformationTransformer** is a fine-tuned [distilroberta](https://huggingface.co/distilroberta-base) model. It is trained and evaluated on 10,000 manually annotated sentences gleaned from the Q&A-section of quarterly earnings conference calls. In particular, it was trained on sentences issued by firm executives to discriminate between sentences that allude to **business transformation** vis-à-vis those that discuss topics other than business transformations. More details about the training procedure can be found [below](#model-training).
## Background
Context on the project.
## Usage
The model is intended to be used for sentence classification: It creates a contextual text representation from the input sentence and outputs a probability value. `LABEL_1` refers to a sentence that is predicted to contain transformation-related content (vice versa for `LABEL_0`). The query should consist of a single sentence.
## Usage (API)
```python
import json
import requests
API_TOKEN = <TOKEN>
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://api-inference.huggingface.co/models/simonschoe/TransformationTransformer"

def query(payload):
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))
query({"inputs": "<insert-sentence-here>"})
```
## Usage (transformers)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("simonschoe/TransformationTransformer")
model = AutoModelForSequenceClassification.from_pretrained("simonschoe/TransformationTransformer")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)
classifier('<insert-sentence-here>')
```
## Model Training
The model has been trained on text data stemming from earnings call transcripts. The data is restricted to a call's question-and-answer (Q&A) section and the remarks by firm executives. The data has been segmented into individual sentences using [`spacy`](https://spacy.io/).
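A minimal sketch of that segmentation step (the `en_core_web_sm` pipeline and the example transcript are assumptions, not taken from the project code):
```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline
remarks = "We beat guidance this quarter. And we are shifting resources to new growth areas."
# split executive remarks into individual sentences
sentences = [sent.text.strip() for sent in nlp(remarks).sents]
print(sentences)
```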
**Statistics of Training Data:**
- Labeled sentences: 10,000
- Data distribution: xxx
- Inter-coder agreement: xxx
The following code snippet presents the training pipeline:
<link to script>
|
wanyu/IteraTeR-PEGASUS-Revision-Generator | 3e88c310f0f5d702bd1ba50e89eb07055d76f293 | 2022-04-04T20:08:12.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:IteraTeR_full_sent",
"arxiv:2203.03802",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | wanyu | null | wanyu/IteraTeR-PEGASUS-Revision-Generator | 18 | null | transformers | 8,839 | ---
datasets:
- IteraTeR_full_sent
---
# IteraTeR PEGASUS model
This model was obtained by fine-tuning [google/pegasus-large](https://huggingface.co/google/pegasus-large) on [IteraTeR-full-sent](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset.
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802) <br>
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
## Text Revision Task
Given an edit intention and an original sentence, our model can generate a revised sentence.<br>
The edit intentions are provided by [IteraTeR-full-sent](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset, which are categorized as follows:
<table>
<tr>
<th>Edit Intention</th>
<th>Definition</th>
<th>Example</th>
</tr>
<tr>
<td>clarity</td>
<td>Make the text more formal, concise, readable and understandable.</td>
<td>
Original: It's like a house which anyone can enter in it. <br>
Revised: It's like a house which anyone can enter.
</td>
</tr>
<tr>
<td>fluency</td>
<td>Fix grammatical errors in the text.</td>
<td>
Original: In the same year he became the Fellow of the Royal Society. <br>
Revised: In the same year, he became the Fellow of the Royal Society.
</td>
</tr>
<tr>
<td>coherence</td>
<td>Make the text more cohesive, logically linked and consistent as a whole.</td>
<td>
Original: Achievements and awards Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy. <br>
Revised: Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy.
</td>
</tr>
<tr>
<td>style</td>
<td>Convey the writer’s writing preferences, including emotions, tone, voice, etc..</td>
<td>
Original: She was last seen on 2005-10-22. <br>
Revised: She was last seen on October 22, 2005.
</td>
</tr>
</table>
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("wanyu/IteraTeR-PEGASUS-Revision-Generator")
model = AutoModelForSeq2SeqLM.from_pretrained("wanyu/IteraTeR-PEGASUS-Revision-Generator")
before_input = '<fluency> I likes coffee.'
model_input = tokenizer(before_input, return_tensors='pt')
model_outputs = model.generate(**model_input, num_beams=8, max_length=1024)
after_text = tokenizer.batch_decode(model_outputs, skip_special_tokens=True)[0]
``` |
Helsinki-NLP/opus-mt-tc-big-en-fi | 160f657ed4985485d6e87b746a86e4382f67ef47 | 2022-06-01T13:10:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"fi",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-fi | 18 | null | transformers | 8,840 | ---
language:
- en
- fi
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-fi
results:
- task:
name: Translation eng-fin
type: translation
args: eng-fin
dataset:
name: flores101-devtest
type: flores_101
args: eng fin devtest
metrics:
- name: BLEU
type: bleu
value: 27.6
- task:
name: Translation eng-fin
type: translation
args: eng-fin
dataset:
name: newsdev2015
type: newsdev2015
args: eng-fin
metrics:
- name: BLEU
type: bleu
value: 24.2
- task:
name: Translation eng-fin
type: translation
args: eng-fin
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-fin
metrics:
- name: BLEU
type: bleu
value: 39.3
- task:
name: Translation eng-fin
type: translation
args: eng-fin
dataset:
name: newstest2015
type: wmt-2015-news
args: eng-fin
metrics:
- name: BLEU
type: bleu
value: 26.4
- task:
name: Translation eng-fin
type: translation
args: eng-fin
dataset:
name: newstest2016
type: wmt-2016-news
args: eng-fin
metrics:
- name: BLEU
type: bleu
value: 28.8
- task:
name: Translation eng-fin
type: translation
args: eng-fin
dataset:
name: newstest2017
type: wmt-2017-news
args: eng-fin
metrics:
- name: BLEU
type: bleu
value: 31.3
- task:
name: Translation eng-fin
type: translation
args: eng-fin
dataset:
name: newstest2019
type: wmt-2019-news
args: eng-fin
metrics:
- name: BLEU
type: bleu
value: 26.4
---
# opus-mt-tc-big-en-fi
Neural machine translation model for translating from English (en) to Finnish (fi).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): eng
* target language(s): fin
* valid target language labels: >>fin<<
* model: transformer (big)
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fin/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT eng-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fin/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fin<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Russia is big.",
"Touch wood!"
]
model_name = "pytorch-models/opus-mt-tc-big-en-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Venäjä on suuri.
# Kosketa puuta!
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-fi")
print(pipe("Russia is big."))
# expected output: Venäjä on suuri.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fin/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fin/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-fin | tatoeba-test-v2021-08-07 | 0.64352 | 39.3 | 10690 | 65122 |
| eng-fin | flores101-devtest | 0.61334 | 27.6 | 1012 | 18781 |
| eng-fin | newsdev2015 | 0.58367 | 24.2 | 1500 | 23091 |
| eng-fin | newstest2015 | 0.60080 | 26.4 | 1370 | 19735 |
| eng-fin | newstest2016 | 0.61636 | 28.8 | 3000 | 47678 |
| eng-fin | newstest2017 | 0.64381 | 31.3 | 3002 | 45269 |
| eng-fin | newstest2018 | 0.55626 | 19.7 | 3000 | 44836 |
| eng-fin | newstest2019 | 0.58420 | 26.4 | 1997 | 38369 |
| eng-fin | newstestB2016 | 0.57554 | 23.3 | 3000 | 45766 |
| eng-fin | newstestB2017 | 0.60212 | 26.8 | 3002 | 45506 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: f084bad
* port time: Tue Mar 22 14:42:32 EET 2022
* port machine: LM0-400-22516.local
|
efederici/sentence-it5-base | 73d3d9a749d4fbe85c54e501b334f9000a7f43cb | 2022-03-29T23:09:01.000Z | [
"pytorch",
"t5",
"it",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | efederici | null | efederici/sentence-it5-base | 18 | 2 | sentence-transformers | 8,841 | ---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-base)) base model. It is trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)), tags/news-article pairs, headline/text pairs ([change-it](https://huggingface.co/datasets/gsarti/change_it)) and on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-base')
model = AutoModel.from_pretrained('efederici/sentence-IT5-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
morenolq/spotify-podcast-advertising-classification | 43e9bd006f0d401e5161434856a48a19c58bebbc | 2022-07-02T12:12:18.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Spotify Podcasts Dataset",
"transformers",
"classification"
] | text-classification | false | morenolq | null | morenolq/spotify-podcast-advertising-classification | 18 | 2 | transformers | 8,842 | ---
language: "en"
datasets:
- Spotify Podcasts Dataset
tags:
- bert
- classification
- pytorch
pipeline:
- text-classification
widget:
- text: "__START__ [SEP] This is the first podcast on natural language processing applied to spoken language."
- text: "This is the first podcast on natural language processing applied to spoken language. [SEP] You can find us on https://twitter.com/PodcastExampleClassifier."
- text: "You can find us on https://twitter.com/PodcastExampleClassifier. [SEP] You can also subscribe to our newsletter https://newsletter.com/PodcastExampleClassifier."
---
**General Information**
This is a `bert-base-cased` binary classification model, fine-tuned to classify whether a given sentence contains advertising content. It leverages previous-sentence context to make more accurate predictions.
The model is used in the paper 'Leveraging multimodal content for podcast summarization' published at ACM SAC 2022.
**Usage:**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained('morenolq/spotify-podcast-advertising-classification')
tokenizer = AutoTokenizer.from_pretrained('morenolq/spotify-podcast-advertising-classification')
desc_sentences = ["Sentence 1", "Sentence 2", "Sentence 3"]
for i, s in enumerate(desc_sentences):
    # use the previous sentence as context; "__START__" marks the first sentence
    if i == 0:
        context = "__START__"
    else:
        context = desc_sentences[i-1]
    out = tokenizer(context, s, padding="max_length",
                    max_length=256,
                    truncation=True,
                    return_attention_mask=True,
                    return_tensors='pt')
    outputs = model(**out)
    print(f"{s},{outputs}")
```
The manually annotated data used for model fine-tuning is available [here](https://github.com/MorenoLaQuatra/MATeR/blob/main/description_sentences_classification.tsv)
Below is the classification report from the model evaluation on the test split:
```
precision recall f1-score support
0 0.95 0.93 0.94 256
1 0.88 0.91 0.89 140
accuracy 0.92 396
macro avg 0.91 0.92 0.92 396
weighted avg 0.92 0.92 0.92 396
```
If you find it useful, please cite the following paper:
```bibtex
@inproceedings{10.1145/3477314.3507106,
author = {Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo},
title = {Leveraging Multimodal Content for Podcast Summarization},
year = {2022},
isbn = {9781450387132},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477314.3507106},
doi = {10.1145/3477314.3507106},
booktitle = {Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing},
pages = {863–870},
numpages = {8},
keywords = {multimodal learning, multimodal features fusion, extractive summarization, deep learning, podcast summarization},
location = {Virtual Event},
series = {SAC '22}
}
``` |
AnonymousSub/roberta_FT_new_newsqa | 226a14e7e40e5141b3bcf6a7f94b216645990755 | 2022-04-05T15:12:55.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/roberta_FT_new_newsqa | 18 | null | transformers | 8,843 | Entry not found |
vachevkd/qna-t5base-squad | 71d22699f1562d48d6841577be0d0dc656249162 | 2022-04-06T18:23:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vachevkd | null | vachevkd/qna-t5base-squad | 18 | null | transformers | 8,844 | Entry not found |
vachevkd/dg-t5base-race | be3fe37c79b377c9616735013c04859012fbbfe0 | 2022-04-06T18:30:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vachevkd | null | vachevkd/dg-t5base-race | 18 | null | transformers | 8,845 | Entry not found |
ydshieh/tiny-random-gptj-for-causal-lm | f64f714d1334967753f62f401bb54e6aa8577e1d | 2022-04-08T10:20:49.000Z | [
"pytorch",
"tf",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | ydshieh | null | ydshieh/tiny-random-gptj-for-causal-lm | 18 | null | transformers | 8,846 | Entry not found |
agdsga/chinese-pert-large-finetuned-product | b838e495b16be9d00b976d5e688aed12a27d9c73 | 2022-04-12T11:42:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index"
] | text-generation | false | agdsga | null | agdsga/chinese-pert-large-finetuned-product | 18 | null | transformers | 8,847 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: chinese-pert-large-finetuned-product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-pert-large-finetuned-product
This model is a fine-tuned version of [hfl/chinese-pert-large](https://huggingface.co/hfl/chinese-pert-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0545 | 1.0 | 3237 | 0.0532 |
| 0.0451 | 2.0 | 6474 | 0.0465 |
| 0.0414 | 3.0 | 9711 | 0.0439 |
| 0.0198 | 4.0 | 12948 | 0.0220 |
| 0.0191 | 5.0 | 16185 | 0.0217 |
| 0.0188 | 6.0 | 19422 | 0.0215 |
| 0.0185 | 7.0 | 22659 | 0.0212 |
| 0.0183 | 8.0 | 25896 | 0.0209 |
| 0.0181 | 9.0 | 29133 | 0.0208 |
| 0.018 | 10.0 | 32370 | 0.0208 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nielsr/convnext-tiny-224-finetuned-eurosat-albumentations | 5aac61b2ae3092a51c276a26fa85dbc2ef29dd70 | 2022-04-12T12:40:48.000Z | [
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nielsr | null | nielsr/convnext-tiny-224-finetuned-eurosat-albumentations | 18 | null | transformers | 8,848 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-eurosat-albumentations
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9748148148148148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0727
- Accuracy: 0.9748
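A minimal inference sketch (untested; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="nielsr/convnext-tiny-224-finetuned-eurosat-albumentations",
)
print(classifier("path/to/eurosat_tile.png"))  # placeholder path to a satellite image
```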
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.141 | 1.0 | 190 | 0.1496 | 0.9544 |
| 0.0736 | 2.0 | 380 | 0.0958 | 0.9719 |
| 0.0568 | 3.0 | 570 | 0.0727 | 0.9748 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Wanjiru/bert-base-multilingual_en_ner_ | 40da7ea7287bf8b404b27dd86e55285513008be6 | 2022-04-14T12:33:55.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Wanjiru | null | Wanjiru/bert-base-multilingual_en_ner_ | 18 | 1 | transformers | 8,849 | | Label ID | Label Name |
|:--------:|:----------:|
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
|
rmihaylov/bert-base-ner-theseus-bg | 7a790473402b50e72e29f9b65099ce397de7ac7b | 2022-04-16T19:43:53.000Z | [
"pytorch",
"bert",
"token-classification",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:1810.04805",
"arxiv:2002.02925",
"transformers",
"torch",
"license:mit",
"autotrain_compatible"
] | token-classification | false | rmihaylov | null | rmihaylov/bert-base-ner-theseus-bg | 18 | null | transformers | 8,850 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# BERT BASE (cased) finetuned on Bulgarian named-entity-recognition data
Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it does make a difference
between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
It was finetuned on public named-entity-recognition Bulgarian data.
Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
>>> 'ner',
>>> model='rmihaylov/bert-base-ner-theseus-bg',
>>> tokenizer='rmihaylov/bert-base-ner-theseus-bg',
>>> device=0,
>>> revision=None)
>>> output = model('Здравей, аз се казвам Иван.')
>>> print(output)
[{'end': 26,
'entity': 'B-PER',
'index': 6,
'score': 0.9937722,
'start': 21,
'word': '▁Иван'}]
```
|
Souvikcmsa/Roberta_Sentiment_Analysis | e3cf03e5e9636fbcee84e97ab89a74f20f2ef773 | 2022-04-20T08:53:33.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Souvikcmsa/autotrain-data-sentimentAnalysis_By_Souvik",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Souvikcmsa | null | Souvikcmsa/Roberta_Sentiment_Analysis | 18 | null | transformers | 8,851 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Souvikcmsa/autotrain-data-sentimentAnalysis_By_Souvik
co2_eq_emissions: 4.453029772491864
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 762623422
- CO2 Emissions (in grams): 4.453029772491864
## Validation Metrics
- Loss: 0.40843138098716736
- Accuracy: 0.8302828618968386
- Macro F1: 0.8302447939743022
- Micro F1: 0.8302828618968385
- Weighted F1: 0.8302151855901072
- Macro Precision: 0.8310980209442669
- Micro Precision: 0.8302828618968386
- Weighted Precision: 0.8313262654775467
- Macro Recall: 0.8305699539252172
- Micro Recall: 0.8302828618968386
- Weighted Recall: 0.8302828618968386
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Souvikcmsa/autotrain-sentimentAnalysis_By_Souvik-762623422", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Zia/distilbert-base-uncased-finetuned-emotion | 57931d0b1cedcdf6373f68c78bdcf24522d6f6d5 | 2022-04-24T17:48:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Zia | null | Zia/distilbert-base-uncased-finetuned-emotion | 18 | null | transformers | 8,852 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9365
- name: F1
type: f1
value: 0.9366968648795959
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1707
- Accuracy: 0.9365
- F1: 0.9367
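A minimal inference sketch (untested; the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Zia/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled with these results!"))
```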
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0746 | 1.0 | 250 | 0.1932 | 0.9335 | 0.9330 |
| 0.0565 | 2.0 | 500 | 0.1774 | 0.939 | 0.9391 |
| 0.0539 | 3.0 | 750 | 0.1707 | 0.9365 | 0.9367 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
yhavinga/t5-small-24L-ccmatrix-multi | b9a8c9c56920570a39de96831255c91ece6c8a40 | 2022-06-14T10:29:41.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"nl",
"en",
"dataset:yhavinga/mc4_nl_cleaned",
"dataset:yhavinga/ccmatrix",
"transformers",
"translation",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | yhavinga | null | yhavinga/t5-small-24L-ccmatrix-multi | 18 | null | transformers | 8,853 | ---
language:
- nl
- en
datasets:
- yhavinga/mc4_nl_cleaned
- yhavinga/ccmatrix
tags:
- t5
- translation
- seq2seq
pipeline_tag: translation
widget:
- text: "It is a painful and tragic spectacle that rises before me: I have drawn back the curtain from the rottenness of man. This word, in my mouth, is at least free from one suspicion: that it involves a moral accusation against humanity."
- text: "For once Fletcher’s sedate features showed a certain lightness. 'I believe I will linger awhile longer.' He indicated a holoscreen which was displaying the image from an external camera. Cloud-splattered landscape was rolling past, pastel greens, browns, and blues illuminated by Duke’s radiance. 'It is not often a mortal man is permitted to view a world over the shoulder of angels.'"
license: apache-2.0
---
# t5-small-24L-ccmatrix-multi
A [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) model finetuned for Dutch to English and English to Dutch translation on the CCMatrix dataset.
Evaluation metrics of this model are listed in the **Translation models** section below.
You can use this model directly with a pipeline for text translation:
```python
model_name = "yhavinga/t5-small-24L-ccmatrix-multi"
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM
from transformers import pipeline
import torch
device_num = 0 if torch.cuda.is_available() else -1
device = "cpu" if device_num < 0 else f"cuda:{device_num}"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
params = {"max_length": 128, "num_beams": 4, "early_stopping": True}
en_to_nl = pipeline("translation_en_to_nl", tokenizer=tokenizer, model=model, device=device_num)
print(en_to_nl("""Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.""",
**params)[0]['translation_text'])
nl_to_en = pipeline("translation_nl_to_en", tokenizer=tokenizer, model=model, device=device_num)
print(nl_to_en("""De jonge Wehling zat gebogen in zijn stoel, zijn hoofd in zijn hand. Hij was zo stoffig, zo stil en kleurloos dat hij vrijwel onzichtbaar was.""",
**params)[0]['translation_text'])
```
This **t5 eff** model has **249M** parameters.
It was pre-trained with the masked language modeling objective on the dataset
`mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **4d10h**,
with a sequence length of **512**, batch size **128** and **851852** total steps (**56B** tokens).
Pre-training evaluation loss and accuracy are **1,18** and **0,74**.
Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function,
and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The T5-eff models are models that differ in their number of layers. The table will list
the several dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
## Evaluation
Most models from the list above have been evaluated on summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and y-axis the summarization Rouge1 translation score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green, models with slower inference speed in blue.

The next two sections provide more information on how the evaluation was performed.
## Evaluation on summarization
The models below have been evaluated for summarization on 50K samples from the CNN Dailymail dataset.
All models were fine-tuned with the AdamW optimizer with a batch size of 128 and constant learning rate of 1e-3 after a
warmup of 32 steps, with a label smoothing factor of 0.05. Article and summary token lengths were set to 1024 and 142.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
The numbers reported are the Rouge scores on 1000 documents from the test split. The rouge1 score is visualized in the evaluation figure above.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
## Evaluation on translation
The models below have been evaluated for English to Dutch translation on 50K samples from the CCMatrix dataset.
Note that the first four models are pre-trained on Dutch only. That they still perform adequate is probably because
the translation direction is English to Dutch.
All models were fine-tuned with the AdamW optimizer with a batch size of 128 and constant learning rate of 5e-5 after a
warmup of 32 steps, with a label smoothing factor of 0.1 and maximum sequence length of 128 tokens.
The numbers reported are the Bleu scores on 1000 documents from the test split.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score
averaged over all three evaluation datasets. The best scores displayed in bold for both translation directions.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories where helpful in setting up the TPU-VM,
and getting an idea what sensible hyper-parameters are for training gpt2 from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
davidenam/distilbert-base-uncased-finetuned-emotion | 888eea940289f187a159db2ff86742f9e97203bc | 2022-04-27T21:59:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | davidenam | null | davidenam/distilbert-base-uncased-finetuned-emotion | 18 | null | transformers | 8,854 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9203318889648883
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2230
- Accuracy: 0.9205
- F1: 0.9203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3224 | 0.9055 | 0.9034 |
| No log | 2.0 | 500 | 0.2230 | 0.9205 | 0.9203 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
OFA-Sys/OFA-medium | 0f35145e94917f4954001fb8ac213dd626de1e72 | 2022-07-25T11:50:59.000Z | [
"pytorch",
"ofa",
"transformers",
"license:apache-2.0"
] | null | false | OFA-Sys | null | OFA-Sys/OFA-medium | 18 | 3 | transformers | 8,855 | ---
license: apache-2.0
---
# OFA-medium
This is the **medium** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` with the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` with the model weights. There is no need to worry about a mismatch between Fairseq and transformers, since we have already addressed the issue.
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install the transformers and download the models as shown below.
```
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-medium
```
Afterwards, set `ckpt_dir` to the path of the downloaded OFA-medium directory, and prepare an image for the example below. Also, ensure that you have Pillow and torchvision installed in your environment.
```
>>> import torch
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 256
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
>>> # using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
>>> # using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
Truefilter/bbase_go_emotions | 0a80b3900c5344f15f02bbfff149ad8751b3a4f3 | 2022-04-29T15:31:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Truefilter | null | Truefilter/bbase_go_emotions | 18 | null | transformers | 8,856 | Entry not found |
anshr/distilgpt2_supervised_model_final | a900c56c19bb7915f875bde78759c8e3718bfff8 | 2022-05-02T22:15:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | anshr | null | anshr/distilgpt2_supervised_model_final | 18 | null | transformers | 8,857 | Entry not found |
enimai/mbart-large-50-paraphrase-finetuned-for-fr | dea9e2d720c1c1841a19b1d30262ca061a532219 | 2022-05-03T17:36:09.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | enimai | null | enimai/mbart-large-50-paraphrase-finetuned-for-fr | 18 | null | transformers | 8,858 | ---
license: apache-2.0
---
|
jeremyccollinsmpi/autotrain-inference_probability_2-840226804 | 0f58601ed0d3f53339cfffd5f8551a554c2494f8 | 2022-05-17T07:41:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:jeremyccollinsmpi/autotrain-data-inference_probability_2",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | jeremyccollinsmpi | null | jeremyccollinsmpi/autotrain-inference_probability_2-840226804 | 18 | null | transformers | 8,859 |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- jeremyccollinsmpi/autotrain-data-inference_probability_2
co2_eq_emissions: 0.02920886926438328
---
# Description
The input structure is:
`summarize: [text]. hypothesis: [hypothesis]`, and the output is 0 (hypothesis is not supported) or 1 (hypothesis is supported).
This tests whether a hypothesis is true given the preceding text. Currently the model is trained on banking chatbot intent data, such as:
summarize: How old do my kids need to be to use your service?. hypothesis: asking about an age limit
Output: 1
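A minimal sketch of querying the model with this prompt format through transformers (generation settings below are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "jeremyccollinsmpi/autotrain-inference_probability_2-840226804"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "How old do my kids need to be to use your service?"
hypothesis = "asking about an age limit"
inputs = tokenizer(f"summarize: {text}. hypothesis: {hypothesis}", return_tensors="pt")
outputs = model.generate(**inputs, max_length=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: 1
```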
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 840226804
- CO2 Emissions (in grams): 0.02920886926438328
## Validation Metrics
- Loss: 0.09617297351360321
- Rouge1: 91.2874
- Rouge2: 0.0
- RougeL: 91.2874
- RougeLsum: 91.4174
- Gen Len: 2.4915
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/jeremyccollinsmpi/autotrain-inference_probability_2-840226804
``` |
dpuccine/bert-finetuned-ner | c3c12a639d0f92c8385161f48d0a56cf6c007ff0 | 2022-05-10T17:29:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | dpuccine | null | dpuccine/bert-finetuned-ner | 18 | null | transformers | 8,860 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9323407775020678
- name: Recall
type: recall
value: 0.9485021878155503
- name: F1
type: f1
value: 0.9403520480520563
- name: Accuracy
type: accuracy
value: 0.9859304173779949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0624
- Precision: 0.9323
- Recall: 0.9485
- F1: 0.9404
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.087 | 1.0 | 1756 | 0.0696 | 0.9183 | 0.9406 | 0.9293 | 0.9832 |
| 0.0378 | 2.0 | 3512 | 0.0564 | 0.9355 | 0.9502 | 0.9428 | 0.9863 |
| 0.0194 | 3.0 | 5268 | 0.0624 | 0.9323 | 0.9485 | 0.9404 | 0.9859 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_77 | acb48ae7cda063c5e2c789afb64767aaacc51814 | 2022-05-11T01:22:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_77 | 18 | null | transformers | 8,861 | Entry not found |
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_77 | 8804ed8fea1fad03270c0ce8ed3cda9d3af8da9b | 2022-05-11T01:39:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_77 | 18 | null | transformers | 8,862 | Entry not found |
James-kc-min/F_Roberta_classifier2 | ffda557d70a9139a57f8aeb44d08eea669de586c | 2022-05-11T14:15:01.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | James-kc-min | null | James-kc-min/F_Roberta_classifier2 | 18 | null | transformers | 8,863 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: F_Roberta_classifier2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# F_Roberta_classifier2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1317
- Accuracy: 0.9751
- F1: 0.9751
- Precision: 0.9751
- Recall: 0.9751
- C Report:

| | precision | recall | f1-score | support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| 0 | 0.97 | 0.98 | 0.98 | 1467 |
| 1 | 0.98 | 0.97 | 0.98 | 1466 |
| accuracy | | | 0.98 | 2933 |
| macro avg | 0.98 | 0.98 | 0.98 | 2933 |
| weighted avg | 0.98 | 0.98 | 0.98 | 2933 |

- C Matrix: None
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Per-class P / R / F1 (class 0; class 1) | C Matrix |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---|:---:|
| 0.1626 | 1.0 | 614 | 0.0936 | 0.9707 | 0.9707 | 0.9707 | 0.9707 | 0.97 / 0.97 / 0.97; 0.97 / 0.97 / 0.97 | None |
| 0.0827 | 2.0 | 1228 | 0.0794 | 0.9731 | 0.9731 | 0.9731 | 0.9731 | 0.96 / 0.98 / 0.97; 0.98 / 0.96 / 0.97 | None |
| 0.0525 | 3.0 | 1842 | 0.1003 | 0.9737 | 0.9737 | 0.9737 | 0.9737 | 0.97 / 0.98 / 0.97; 0.98 / 0.97 / 0.97 | None |
| 0.0329 | 4.0 | 2456 | 0.1184 | 0.9751 | 0.9751 | 0.9751 | 0.9751 | 0.98 / 0.97 / 0.98; 0.97 / 0.98 / 0.98 | None |
| 0.0179 | 5.0 | 3070 | 0.1317 | 0.9751 | 0.9751 | 0.9751 | 0.9751 | 0.97 / 0.98 / 0.98; 0.98 / 0.97 / 0.98 | None |

(Per-class support is 1467 for class 0 and 1466 for class 1 at every epoch.)
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
edumunozsala/bertin_base_sentiment_analysis_es | 159a628aa01ee6d5930752ac632de7327ab3fa38 | 2022-07-29T09:18:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"dataset:IMDbreviews_es",
"transformers",
"sagemaker",
"bertin",
"TextClassification",
"SentimentAnalysis",
"license:apache-2.0",
"model-index"
] | text-classification | false | edumunozsala | null | edumunozsala/bertin_base_sentiment_analysis_es | 18 | null | transformers | 8,864 | ---
language: es
tags:
- sagemaker
- bertin
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: bertin_base_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "IMDb Reviews in Spanish"
type: IMDbreviews_es
metrics:
- name: Accuracy
type: accuracy
value: 0.898933
- name: F1 Score
type: f1
value: 0.8989063
- name: Precision
type: precision
value: 0.8771473
- name: Recall
type: recall
value: 0.9217724
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
# Model bertin_base_sentiment_analysis_es
## **A finetuned model for Sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **Bertin base**, a RoBERTa-base model pre-trained on the Spanish portion of mC4 using Flax.
It was trained by the Bertin Project. [Link to base model](https://huggingface.co/bertin-project/bertin-roberta-base-spanish)
Article: BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling
- Author = Javier De la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury,
- journal = Procesamiento del Lenguaje Natural,
- volume = 68, number = 0, year = 2022
- url = http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403
## Dataset
The dataset is a collection of movie reviews in Spanish, about 50,000 reviews. The dataset is balanced and provides every review in English, in Spanish, and the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Intended uses & limitations
This model is intended for sentiment analysis on Spanish corpora. It is fine-tuned specifically for movie reviews, but it can be applied to other kinds of reviews.
## Hyperparameters
{
"epochs": "4",
"train_batch_size": "32",
"eval_batch_size": "8",
"fp16": "true",
"learning_rate": "3e-05",
"model_name": "\"bertin-project/bertin-roberta-base-spanish\"",
"sagemaker_container_log_level": "20",
"sagemaker_program": "\"train.py\"",
}
## Evaluation results
- Accuracy = 0.8989333333333334
- F1 Score = 0.8989063750333421
- Precision = 0.877147319104633
- Recall = 0.9217724288840262
## Test results
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/bertin_base_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/bertin_base_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
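To turn the predicted class index into a readable label, the `id2label` mapping stored in the model config can be used. This small follow-up continues the snippet above and assumes the fine-tuning job saved an `id2label` mapping; if it did not, the raw index is printed instead:
```python
# Continues from the snippet above: `output` holds the predicted class index
label_id = output.item()
print(model.config.id2label.get(label_id, label_id))
```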
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
xhyi/CodeGen-2B-Multi | bf4b5c321dd655e9714fe284bf07c2be01fd93aa | 2022-05-18T17:33:15.000Z | [
"pytorch",
"codegen",
"text-generation",
"en",
"transformers",
"text generation",
"causal-lm",
"license:bsd-3-clause"
] | text-generation | false | xhyi | null | xhyi/CodeGen-2B-Multi | 18 | null | transformers | 8,865 | ---
language:
- en
tags:
- codegen
- text generation
- pytorch
- causal-lm
license: bsd-3-clause
---
# Salesforce CodeGen
Ported Salesforce CodeGen models to work with Hugging Face Transformers without any extra code (the model-specific code is bundled).
## Overview
The CodeGen model was proposed by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong from Salesforce Research.
The abstract from the paper is the following: Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We plan to make the training library JaxFormer including checkpoints available as open source.
## Usage
`trust_remote_code` is needed because the [torch modules](https://github.com/salesforce/CodeGen/tree/main/jaxformer/hf/codegen) for the custom CodeGen model are bundled.
```python
from transformers import AutoModelForCausalLM, GPT2Tokenizer

model_folder = "xhyi/CodeGen-2B-Multi"  # or a local path to a downloaded checkpoint
tokenizer = GPT2Tokenizer.from_pretrained(model_folder)
model = AutoModelForCausalLM.from_pretrained(model_folder, trust_remote_code=True)
``` |
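Continuing from the loading snippet above, a minimal generation call might look like the sketch below; the prompt and `max_new_tokens` value are arbitrary illustrative choices, not recommendations from the authors:
```python
# Assumes `tokenizer` and `model` from the loading snippet above
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```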
steysie/paraphrase-multilingual-mpnet-base-v2-tuned-smartcat | 35dce5fafed492a692d9bd072d7953a5d7fdfc00 | 2022-05-20T20:10:09.000Z | [
"pytorch",
"xlm-roberta",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | steysie | null | steysie/paraphrase-multilingual-mpnet-base-v2-tuned-smartcat | 18 | null | transformers | 8,866 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: paraphrase-multilingual-mpnet-base-v2-tuned-smartcat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphrase-multilingual-mpnet-base-v2-tuned-smartcat
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.0072 | 0.16 | 10000 | 0.0025 |
| 0.0014 | 0.32 | 20000 | 0.0005 |
| 0.0004 | 0.48 | 30000 | 0.0002 |
| 0.0002 | 0.64 | 40000 | 0.0001 |
| 0.0003 | 0.81 | 50000 | 0.0001 |
| 0.0002 | 0.97 | 60000 | 0.0000 |
| 0.0001 | 1.13 | 70000 | 0.0000 |
| 0.0001 | 1.29 | 80000 | 0.0000 |
| 0.0001 | 1.45 | 90000 | 0.0000 |
| 0.0001 | 1.61 | 100000 | 0.0000 |
| 0.0 | 1.77 | 110000 | 0.0000 |
| 0.0 | 1.93 | 120000 | 0.0000 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
imohammad12/GRS-Grammar-Checker-DeBerta | 5dd540d9a686c056e6c8e520a34ccdc929a547da | 2022-05-26T10:48:39.000Z | [
"pytorch",
"deberta",
"text-classification",
"en",
"transformers",
"grs"
] | text-classification | false | imohammad12 | null | imohammad12/GRS-Grammar-Checker-DeBerta | 18 | null | transformers | 8,867 | ---
language: en
tags: grs
---
## Citation
Please star the [GRS GitHub repo](https://github.com/imohammad12/GRS) and cite the paper if you found our model useful:
```
@inproceedings{dehghan-etal-2022-grs,
title = "{GRS}: Combining Generation and Revision in Unsupervised Sentence Simplification",
author = "Dehghan, Mohammad and
Kumar, Dhruv and
Golab, Lukasz",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.77",
pages = "949--960",
abstract = "We propose GRS: an unsupervised approach to sentence simplification that combines text generation and text revision. We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation. This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability. We demonstrate these advantages of GRS compared to existing methods on the Newsela and ASSET datasets.",
}
``` |
gigikenneth/family-guy-bot | d02b801f8d9a48ae1d6342466a41a39a8c501ac0 | 2022-05-26T19:44:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | gigikenneth | null | gigikenneth/family-guy-bot | 18 | null | transformers | 8,868 | ---
tags:
- conversational
---
# Stewie Chatbot |
RANG012/SENATOR | 64670b5d0bd1fbdea79a55e29b8ab405e742bd41 | 2022-06-01T07:17:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | RANG012 | null | RANG012/SENATOR | 18 | null | transformers | 8,869 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: SENATOR
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.916
- name: F1
type: f1
value: 0.9166666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SENATOR
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2707
- Accuracy: 0.916
- F1: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Yarn/distilbert-base-uncased-mnli-finetuned-mnli | f7e6ca9e289817e2de1167156bcd735673af5285 | 2022-06-21T18:16:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Yarn | null | Yarn/distilbert-base-uncased-mnli-finetuned-mnli | 18 | null | transformers | 8,870 | Entry not found |
ghadeermobasher/Original-PubMedBERT-NCBI | 6c6f160510ee7ee986cadbfc4cbb59a67a9116fa | 2022-06-09T10:27:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-PubMedBERT-NCBI | 18 | null | transformers | 8,871 | Entry not found |
ghadeermobasher/Orignal-SciBERT-NCBI | 8d7878273baad8f4b60ecdd710658730ea91d36e | 2022-06-09T11:24:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Orignal-SciBERT-NCBI | 18 | null | transformers | 8,872 | Entry not found |
ghadeermobasher/Original-BlueBERT-BC5CDR-Disease | 97893864f2f0011b7bb6040d50953a940d068b3f | 2022-06-09T11:20:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BlueBERT-BC5CDR-Disease | 18 | null | transformers | 8,873 | Entry not found |
ghadeermobasher/Original-PubMedBERT-BC5CDR-disease | 9e601891312d968a56cbef491229c3d228953340 | 2022-06-09T11:29:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-PubMedBERT-BC5CDR-disease | 18 | null | transformers | 8,874 | Entry not found |
ghadeermobasher/Original-BlueBERT-BC5CDR-Chemical | 821bfe60dc11f695657a253fe401f6f9cebd7d38 | 2022-06-09T12:03:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BlueBERT-BC5CDR-Chemical | 18 | null | transformers | 8,875 | Entry not found |
ghadeermobasher/Original-PubMedBERT-BC5CDR-Chemical | dee786252092c1135c972467ff189207f49e92fc | 2022-06-09T11:55:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-PubMedBERT-BC5CDR-Chemical | 18 | null | transformers | 8,876 | Entry not found |
ghadeermobasher/Original-SciBERT-BC4CHEMD-O | d1eda4a4218237ca965a2206d22d24c5bed19a7c | 2022-06-09T14:06:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC4CHEMD-O | 18 | null | transformers | 8,877 | Entry not found |
ghadeermobasher/Original-PubMedBERT-Linnaeus | 9b32352012d1494fac69ccdab79f37daa4bdb6eb | 2022-06-10T11:13:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-PubMedBERT-Linnaeus | 18 | null | transformers | 8,878 | Entry not found |
Anjoe/german-poetry-gpt2-large | 25d1a886fbe54bebb32bd079e22fec42d7397327 | 2022-07-21T14:35:09.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Anjoe | null | Anjoe/german-poetry-gpt2-large | 18 | null | transformers | 8,879 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: german-poetry-gpt2-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-poetry-gpt2-large
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on German poems.
It achieves the following results on the evaluation set:
- eval_loss: 3.5753
- eval_runtime: 100.7173
- eval_samples_per_second: 51.6
- eval_steps_per_second: 25.805
- epoch: 4.0
- step: 95544
## Model description
This is the large version of GPT-2 (gerpt2-large), fine-tuned on German poetry.
## Intended uses & limitations
It could be used for German poetry generation; a minimal usage sketch is shown below.
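The sketch uses the `text-generation` pipeline; the German prompt and sampling settings are illustrative assumptions only:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Anjoe/german-poetry-gpt2-large")

out = generator(
    "Der Mond steigt über den stillen Wald",  # illustrative prompt
    max_new_tokens=60,
    do_sample=True,
    top_p=0.92,
)
print(out[0]["generated_text"])
```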
## Training and evaluation data
The model was trained on German poems from Projekt Gutenberg.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
speechbrain/asr-wav2vec2-dvoice-amharic | ee19134f21287dd4179087aa230547ebe0ad02fa | 2022-06-10T01:30:20.000Z | [
"wav2vec2",
"feature-extraction",
"dar",
"dataset:Dvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-dvoice-amharic | 18 | 1 | speechbrain | 8,880 | ---
language: "dar"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- Dvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Amharic (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on an [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) Amharic dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 6.71 | 25.50 | 6.57 | 24.92 |
# Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and is trained with the train transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and fine-tuned on the Amharic (ALFFA) dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Amharic)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-dvoice-amharic", savedir="pretrained_models/asr-wav2vec2-dvoice-amharic")
asr_model.transcribe_file('speechbrain/asr-wav2vec2-dvoice-amharic/example_amharic.wav')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
# Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/DVoice/ASR/CTC
python train_with_wav2vec2.py hparams/train_amh_with_wav2vec.yaml --data_folder=/localscratch/ALFFA_PUBLIC/ASR/AMHARIC/data/
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1vNT7RjRuELs7pumBHmfYsrOp9m46D0ym?usp=sharing).
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# About DVoice
DVoice is a community initiative that aims to provide African low resources languages with data and models to facilitate their use of voice technologies. The lack of data on these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling recordings that are retrieved from social media. The DVoice platform currently manages 7 languages including Darija (Moroccan Arabic dialect) whose dataset appears on this version, Wolof, Mandingo, Serere, Pular, Diola, and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of technologies together.
# About AIOX Labs
Based in Rabat, London, and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies.
- It serves the growth of companies, the optimization of processes, and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business-ready data products with a solid algorithmic base and adaptability for the specific needs of each client.
- A complementary team made up of PhDs in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems, and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The research areas of the laboratory are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}},
}
```
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution.
|
AnyaSchen/rugpt3_pushkin | 109105f776fdec08d9eb7572a97ca6f4d92398e5 | 2022-06-15T11:25:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AnyaSchen | null | AnyaSchen/rugpt3_pushkin | 18 | null | transformers | 8,881 | This model was created by additional training of the giant GPT-3 medium on the works of A.S. Pushkin. Now this model can generate poetry in the style of this poet. Fine-tuning of GPT-3 was produced.
 |
rsuwaileh/IDRISI-LMR-HD-TB | 3510abf3da66d8f5529faffdf1c1caf720923985 | 2022-07-18T09:17:42.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | rsuwaileh | null | rsuwaileh/IDRISI-LMR-HD-TB | 18 | null | transformers | 8,882 | This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained using Hurricane Dorian 2019 event (training, development, and test data are used for training) from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-based LMR mode and using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB-partition/)
- [rsuwaileh/IDRISI-LMR-HD-TL](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition/)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
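The checkpoint can be loaded with the standard `transformers` token-classification pipeline. The sketch below is only an illustration: the example tweet is invented, and the label set is assumed to be the one stored in the model config:
```python
from transformers import pipeline

lmr = pipeline(
    "token-classification",
    model="rsuwaileh/IDRISI-LMR-HD-TB",
    aggregation_strategy="simple",  # merge word pieces into full location mentions
)

tweet = "Flooding reported near Nassau as Hurricane Dorian moves over the Bahamas."
print(lmr(tweet))
```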
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
QCRI/bert-base-cased-ccg | 1019a0e7137e1ac936d702b9fd406736870848e2 | 2022-06-13T08:25:22.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | token-classification | false | QCRI | null | QCRI/bert-base-cased-ccg | 18 | null | transformers | 8,883 | ---
license: cc-by-nc-4.0
---
|
ghadeermobasher/BC4CHEMD-Chem-Modified-BlueBERT-384 | c6ecd2e051542a5f6038ae0fbb4678b892ccef5f | 2022-06-14T18:33:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Modified-BlueBERT-384 | 18 | null | transformers | 8,884 | Entry not found |
ahmeddbahaa/xlmroberta-finetune-en-cnn | b7b26302a1ad9b37274156df47ba67a328db3c16 | 2022-06-15T15:56:54.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"en",
"ecnoder-decoder",
"xlmroberta",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/xlmroberta-finetune-en-cnn | 18 | null | transformers | 8,885 | ---
tags:
- summarization
- en
- ecnoder-decoder
- xlmroberta
- Abstractive Summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: xlmroberta-finetune-en-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta-finetune-en-cnn
This is an XLM-RoBERTa-based encoder-decoder model fine-tuned on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
Bman/DialoGPT-medium-peppapig | 944854efd38f7fe9d8794c4c84ebbb593e75de90 | 2022-06-16T21:59:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Bman | null | Bman/DialoGPT-medium-peppapig | 18 | 1 | transformers | 8,886 | ---
tags:
- conversational
---
# Peppa Pig DialoGPT Model |
Mahmoud1816Yasser/tmp_trainer | 341e9ce2f8a3d2fe33e69074c9b2ca3f16f00c44 | 2022-06-17T21:10:28.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | audio-classification | false | Mahmoud1816Yasser | null | Mahmoud1816Yasser/tmp_trainer | 18 | null | transformers | 8,887 | ---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
langfab/distilbert-base-uncased-finetuned-movie-genre | ccccffc13708770efbe757a441061150084eb08f | 2022-06-18T19:02:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | langfab | null | langfab/distilbert-base-uncased-finetuned-movie-genre | 18 | null | transformers | 8,888 | Entry not found |
KoichiYasuoka/roberta-base-japanese-aozora-ud-head | 50f04d6d295a46a5f4797590798fd47f9dbac45b | 2022-07-20T03:52:15.000Z | [
"pytorch",
"roberta",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-japanese-aozora-ud-head | 18 | null | transformers | 8,889 | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# roberta-base-japanese-aozora-ud-head
## Model Description
This is a RoBERTa model pretrained on 青空文庫 for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [roberta-base-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-char) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/roberta-base-japanese-aozora-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
|
Zamachi/bert-base-for-multilabel-sentence-classification | 8b52a934c30c9d325322ab6771f0d04e96117457 | 2022-07-14T12:49:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Zamachi | null | Zamachi/bert-base-for-multilabel-sentence-classification | 18 | null | transformers | 8,890 | Entry not found |
ManqingLiu/distilbert-base-uncased-finetuned-emotion | 6fefcec9c6f2607ff45b73c11ca8803739f14d03 | 2022-06-24T06:04:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ManqingLiu | null | ManqingLiu/distilbert-base-uncased-finetuned-emotion | 18 | null | transformers | 8,891 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
- name: F1
type: f1
value: 0.9306050612701778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Accuracy: 0.9305
- F1: 0.9306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1755 | 1.0 | 250 | 0.1831 | 0.925 | 0.9249 |
| 0.1118 | 2.0 | 500 | 0.1709 | 0.9305 | 0.9306 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
austinmw/distilbert-base-uncased-finetuned-health_facts | aba654497687b32f4ec38ca684b79d277a80fd3d | 2022-06-29T18:15:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:health_fact",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | austinmw | null | austinmw/distilbert-base-uncased-finetuned-health_facts | 18 | null | transformers | 8,892 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- health_fact
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-health_facts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: health_fact
type: health_fact
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.628500823723229
- name: F1
type: f1
value: 0.6544946803476833
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-health_facts
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the health_fact dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1227
- Accuracy: 0.6285
- F1: 0.6545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1367 | 1.0 | 154 | 0.9423 | 0.5560 | 0.6060 |
| 0.9444 | 2.0 | 308 | 0.9267 | 0.5733 | 0.6170 |
| 0.8248 | 3.0 | 462 | 0.9483 | 0.5832 | 0.6256 |
| 0.7213 | 4.0 | 616 | 1.0119 | 0.5815 | 0.6219 |
| 0.608 | 5.0 | 770 | 1.1227 | 0.6285 | 0.6545 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
andreaschandra/distilbert-base-uncased-finetuned-emotion | 2ef3e9ba1e9f63ae2050802469f67e0549376e93 | 2022-07-13T13:16:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | andreaschandra | null | andreaschandra/distilbert-base-uncased-finetuned-emotion | 18 | null | transformers | 8,893 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240890586429673
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8218 | 1.0 | 250 | 0.3165 | 0.9025 | 0.9001 |
| 0.2494 | 2.0 | 500 | 0.2186 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
akhisreelibra/bert-malayalam-pos-tagger | ac3d00c95d7df32d0ead63bd00a7d18a63589554 | 2022-07-05T11:26:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | akhisreelibra | null | akhisreelibra/bert-malayalam-pos-tagger | 18 | null | transformers | 8,894 | |
naver/efficient-splade-VI-BT-large-query | 8d4ba56f900620a2ca3efdac9a028473bf703aea | 2022-07-08T13:12:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:ms_marco",
"transformers",
"splade",
"query-expansion",
"document-expansion",
"bag-of-words",
"passage-retrieval",
"knowledge-distillation",
"document encoder",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | naver | null | naver/efficient-splade-VI-BT-large-query | 18 | null | transformers | 8,895 | ---
license: cc-by-nc-sa-4.0
language: "en"
tags:
- splade
- query-expansion
- document-expansion
- bag-of-words
- passage-retrieval
- knowledge-distillation
- document encoder
datasets:
- ms_marco
---
## Efficient SPLADE
Efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the **query** encoder; please also download the **doc** one (https://huggingface.co/naver/efficient-splade-VI-BT-large-doc). For additional details, please visit:
* paper: https://dl.acm.org/doi/10.1145/3477495.3531833
* code: https://github.com/naver/splade
| | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (Inference) ms
| --- | --- | --- | --- | --- |
| `naver/efficient-splade-V-large` | 38.8 | 98.0 | 29.0 | 45.3
| `naver/efficient-splade-VI-BT-large` | 38.0 | 97.8 | 31.1 | 0.7
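For reference, SPLADE-style sparse representations are usually obtained from the MLM logits via a log-saturated ReLU followed by max pooling over the input tokens. The sketch below follows that recipe and is only an illustration under those assumptions, not the official inference script (see the linked SPLADE repository for the reference implementation):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "naver/efficient-splade-VI-BT-large-query"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

query = "what causes high blood pressure"
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# log(1 + ReLU(logits)), masked by attention, then max-pooled over tokens
weights = torch.log1p(torch.relu(logits)) * inputs["attention_mask"].unsqueeze(-1)
sparse_rep = weights.max(dim=1).values.squeeze(0)  # one weight per vocabulary term

# Inspect the highest-weighted (expanded) terms
nonzero = sparse_rep.nonzero().squeeze(1)
top = sorted(
    zip(tokenizer.convert_ids_to_tokens(nonzero.tolist()), sparse_rep[nonzero].tolist()),
    key=lambda t: -t[1],
)[:10]
print(top)
```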
## Citation
If you use our checkpoint, please cite our work:
```
@inproceedings{10.1145/3477495.3531833,
author = {Lassance, Carlos and Clinchant, St\'{e}phane},
title = {An Efficiency Study for SPLADE Models},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531833},
doi = {10.1145/3477495.3531833},
abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {2220–2226},
numpages = {7},
keywords = {splade, latency, information retrieval, sparse representations},
location = {Madrid, Spain},
series = {SIGIR '22}
}
```
|
annahaz/xlm-roberta-base-finetuned-misogyny-sexism | 93e1a9ad2ffa4bf7151a0b92d0a6d4287f79dfad | 2022-07-27T14:45:20.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | annahaz | null | annahaz/xlm-roberta-base-finetuned-misogyny-sexism | 18 | null | transformers | 8,896 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-finetuned-misogyny-sexism
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-misogyny-sexism
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9064
- Accuracy: 0.8334
- F1: 0.3322
- Precision: 0.2498
- Recall: 0.4961
- Mae: 0.1666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3869 | 1.0 | 2395 | 0.2905 | 0.8778 | 0.3528 | 0.3164 | 0.3988 | 0.1222 |
| 0.3539 | 2.0 | 4790 | 0.4143 | 0.8278 | 0.3465 | 0.2536 | 0.5467 | 0.1722 |
| 0.3124 | 3.0 | 7185 | 0.3327 | 0.8568 | 0.3583 | 0.2864 | 0.4786 | 0.1432 |
| 0.2817 | 4.0 | 9580 | 0.5621 | 0.7329 | 0.3092 | 0.1972 | 0.7160 | 0.2671 |
| 0.2651 | 5.0 | 11975 | 0.4376 | 0.8520 | 0.3607 | 0.2821 | 0.5 | 0.1480 |
| 0.2249 | 6.0 | 14370 | 0.5581 | 0.8326 | 0.3312 | 0.2485 | 0.4961 | 0.1674 |
| 0.1958 | 7.0 | 16765 | 0.6728 | 0.8382 | 0.3234 | 0.2484 | 0.4630 | 0.1618 |
| 0.1899 | 8.0 | 19160 | 0.7404 | 0.8304 | 0.3316 | 0.2471 | 0.5039 | 0.1696 |
| 0.1619 | 9.0 | 21555 | 0.8309 | 0.8461 | 0.3382 | 0.2639 | 0.4708 | 0.1539 |
| 0.1453 | 10.0 | 23950 | 0.9064 | 0.8334 | 0.3322 | 0.2498 | 0.4961 | 0.1666 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Shredder/My_model | 0ef65cb9b1d4cb0c44e9f26b451247e082e648c0 | 2022-07-09T10:26:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Shredder | null | Shredder/My_model | 18 | null | transformers | 8,897 | Entry not found |
gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1 | c8a4dda381aa3bcc92a37ae1b3545d203deb5f35 | 2022-07-19T03:23:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1 | 18 | null | transformers | 8,898 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1
This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1](https://huggingface.co/gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5459
- Wer: 0.2463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 0.3909 | 1.0 | 2309 | 0.5615 | 0.2459 |
| 0.4094 | 2.0 | 4618 | 0.5654 | 0.2439 |
| 0.326 | 3.0 | 6927 | 0.5568 | 0.2470 |
| 0.4577 | 4.0 | 9236 | 0.5795 | 0.2474 |
| 0.3628 | 5.0 | 11545 | 0.5459 | 0.2463 |
| 0.3135 | 6.0 | 13854 | 0.5582 | 0.2473 |
| 0.5058 | 7.0 | 16163 | 0.5677 | 0.2439 |
| 0.3188 | 8.0 | 18472 | 0.5646 | 0.2445 |
| 0.3589 | 9.0 | 20781 | 0.5626 | 0.2479 |
| 0.4021 | 10.0 | 23090 | 0.5722 | 0.2452 |
| 0.4362 | 11.0 | 25399 | 0.5659 | 0.2431 |
| 0.3215 | 12.0 | 27708 | 0.5658 | 0.2445 |
| 0.3646 | 13.0 | 30017 | 0.5785 | 0.2459 |
| 0.3757 | 14.0 | 32326 | 0.5757 | 0.2418 |
| 0.3311 | 15.0 | 34635 | 0.5672 | 0.2455 |
| 0.3709 | 16.0 | 36944 | 0.5669 | 0.2434 |
| 0.3342 | 17.0 | 39253 | 0.5610 | 0.2455 |
| 0.3236 | 18.0 | 41562 | 0.5652 | 0.2436 |
| 0.3566 | 19.0 | 43871 | 0.5773 | 0.2407 |
| 0.2912 | 20.0 | 46180 | 0.5764 | 0.2453 |
| 0.3652 | 21.0 | 48489 | 0.5732 | 0.2423 |
| 0.3785 | 22.0 | 50798 | 0.5696 | 0.2423 |
| 0.3968 | 23.0 | 53107 | 0.5690 | 0.2429 |
| 0.2968 | 24.0 | 55416 | 0.5800 | 0.2427 |
| 0.428 | 25.0 | 57725 | 0.5704 | 0.2441 |
| 0.383 | 26.0 | 60034 | 0.5739 | 0.2450 |
| 0.3694 | 27.0 | 62343 | 0.5791 | 0.2437 |
| 0.3449 | 28.0 | 64652 | 0.5780 | 0.2451 |
| 0.3008 | 29.0 | 66961 | 0.5749 | 0.2418 |
| 0.3939 | 30.0 | 69270 | 0.5737 | 0.2424 |
| 0.3451 | 31.0 | 71579 | 0.5805 | 0.2402 |
| 0.3513 | 32.0 | 73888 | 0.5670 | 0.2379 |
| 0.3866 | 33.0 | 76197 | 0.5706 | 0.2389 |
| 0.3831 | 34.0 | 78506 | 0.5635 | 0.2401 |
| 0.3641 | 35.0 | 80815 | 0.5708 | 0.2405 |
| 0.3345 | 36.0 | 83124 | 0.5699 | 0.2405 |
| 0.2902 | 37.0 | 85433 | 0.5711 | 0.2373 |
| 0.2868 | 38.0 | 87742 | 0.5713 | 0.2389 |
| 0.3232 | 39.0 | 90051 | 0.5702 | 0.2392 |
| 0.3277 | 40.0 | 92360 | 0.5658 | 0.2393 |
| 0.3234 | 41.0 | 94669 | 0.5732 | 0.2412 |
| 0.3625 | 42.0 | 96978 | 0.5740 | 0.2396 |
| 0.4075 | 43.0 | 99287 | 0.5733 | 0.2389 |
| 0.3473 | 44.0 | 101596 | 0.5735 | 0.2394 |
| 0.3157 | 45.0 | 103905 | 0.5721 | 0.2391 |
| 0.3866 | 46.0 | 106214 | 0.5715 | 0.2381 |
| 0.4062 | 47.0 | 108523 | 0.5711 | 0.2380 |
| 0.3871 | 48.0 | 110832 | 0.5716 | 0.2380 |
| 0.2924 | 49.0 | 113141 | 0.5723 | 0.2374 |
| 0.3655 | 50.0 | 115450 | 0.5709 | 0.2379 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
abecode/t5-small-finetuned-xsum | dab35e16d9bfd1b202d003f93a2aaf05280f5100 | 2022-07-09T18:56:13.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | abecode | null | abecode/t5-small-finetuned-xsum | 18 | null | transformers | 8,899 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.3177
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4783
- Rouge1: 28.3177
- Rouge2: 7.7064
- Rougel: 22.2212
- Rougelsum: 22.2193
- Gen Len: 18.8307
## Model description
More information needed
## Intended uses & limitations
More information needed
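
A minimal usage sketch, assuming the checkpoint behaves like a standard T5 summarization model under the `transformers` summarization pipeline (the example article below is invented for illustration and is not from XSum):

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint through the generic summarization pipeline.
summarizer = pipeline("summarization", model="abecode/t5-small-finetuned-xsum")

# Invented example text, used only to show the call signature.
article = (
    "The local council has approved plans for a new cycle path "
    "linking the town centre to the railway station, with work "
    "expected to start next spring."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False))
```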
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7172 | 1.0 | 12753 | 2.4783 | 28.3177 | 7.7064 | 22.2212 | 22.2193 | 18.8307 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|