modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float, 0-36.8M, nullable) | likes (float, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
NAACL2022/spider-nq-question-encoder | d2e07f122d4ca43d40095ead872a6b9951339594 | 2022-07-09T19:15:59.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2112.07708",
"transformers"
] | feature-extraction | false | NAACL2022 | null | NAACL2022/spider-nq-question-encoder | 18 | 4 | transformers | 8,900 | # Spider-NQ: Question Encoder
This is the question encoder of the model fine-tuned on Natural Questions (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note:** We format the passages similarly to DPR, i.e., the title and the text are separated by a `[SEP]` token, but the token type ids are all zeros.
An example usage:
```python
from transformers import AutoTokenizer, DPRQuestionEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-nq-question-encoder")
model = DPRQuestionEncoder.from_pretrained("NAACL2022/spider-nq-question-encoder")
question = "Who is the villain in lord of the rings"
input_dict = tokenizer(question, return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
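For passages, a minimal sketch of the DPR-style formatting described above (the passage title and text are illustrative; because of the weight sharing noted above, the same checkpoint is loaded):
```python
import torch
from transformers import AutoTokenizer, DPRQuestionEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-nq-question-encoder")
model = DPRQuestionEncoder.from_pretrained("NAACL2022/spider-nq-question-encoder")
title = "Sauron"  # illustrative passage title
text = "Sauron is the title character and main antagonist of The Lord of the Rings."
# Title and text are separated by a [SEP] token; token type ids are set to all zeros
input_dict = tokenizer(title, text, return_tensors="pt")
input_dict["token_type_ids"] = torch.zeros_like(input_dict["token_type_ids"])
outputs = model(**input_dict)
passage_embedding = outputs.pooler_output  # shape: (1, hidden_size)
```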
|
aatmasidha/distilbert-base-uncased-newsmodelclassification | e24375800e5aca9acad0a8bb500a576913012d55 | 2022-07-18T09:04:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | aatmasidha | null | aatmasidha/distilbert-base-uncased-newsmodelclassification | 18 | null | transformers | 8,901 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-newsmodelclassification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9278415074713384
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-newsmodelclassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2177
- Accuracy: 0.928
- F1: 0.9278
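A minimal inference sketch (the input text is illustrative; the emitted label names depend on the model's configuration):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="aatmasidha/distilbert-base-uncased-newsmodelclassification")
print(classifier("I am so happy with how this turned out!"))
```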
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8104 | 1.0 | 250 | 0.3057 | 0.9105 | 0.9084 |
| 0.2506 | 2.0 | 500 | 0.2177 | 0.928 | 0.9278 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
TimKond/S-BioLinkBert-MedQuAD | 2bb9fcce285771183ba11f0539411e565bdb7403 | 2022-07-12T17:28:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | TimKond | null | TimKond/S-BioLinkBert-MedQuAD | 18 | null | sentence-transformers | 8,902 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# TimKond/S-BioLinkBert-MedQuAD
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('TimKond/S-BioLinkBert-MedQuAD')
embeddings = model.encode(sentences)
print(embeddings)
```
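Since the model targets sentence similarity, a short follow-up sketch (the query and candidate sentences are illustrative) that scores a query against candidates with cosine similarity:
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('TimKond/S-BioLinkBert-MedQuAD')
query = "What are the symptoms of high blood pressure?"
candidates = [
    "Hypertension is often asymptomatic until complications develop.",
    "Influenza typically causes fever, cough and muscle aches.",
]
# Cosine similarity between the query embedding and each candidate embedding
scores = util.cos_sim(model.encode(query), model.encode(candidates))
print(scores)
```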
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TimKond/S-BioLinkBert-MedQuAD')
model = AutoModel.from_pretrained('TimKond/S-BioLinkBert-MedQuAD')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=TimKond/S-BioLinkBert-MedQuAD)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 17595 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 7037,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
RobertoFont/gpt2-large-bne-milunanoches | 5b32767243feb747026e2a9ca998b0619f6dbe36 | 2022-07-15T17:22:19.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | RobertoFont | null | RobertoFont/gpt2-large-bne-milunanoches | 18 | null | transformers | 8,903 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-large-bne-milunanoches
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-bne-milunanoches
This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-large-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-large-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9118
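A minimal generation sketch (the Spanish prompt and sampling settings are illustrative):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="RobertoFont/gpt2-large-bne-milunanoches")
output = generator("Cuenta la leyenda que", max_length=60, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```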
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.97 | 25 | 3.2210 |
| No log | 1.97 | 50 | 2.9247 |
| No log | 2.97 | 75 | 2.8850 |
| No log | 3.97 | 100 | 2.9118 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Johny201/autotrain-article_pred-1142742075 | b47ca1e9a50400459a6a328da66176cb164b6a10 | 2022-07-17T10:31:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Johny201/autotrain-data-article_pred",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Johny201 | null | Johny201/autotrain-article_pred-1142742075 | 18 | null | transformers | 8,904 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Johny201/autotrain-data-article_pred
co2_eq_emissions: 3.973071565343572
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1142742075
- CO2 Emissions (in grams): 3.973071565343572
## Validation Metrics
- Loss: 0.6098461151123047
- Accuracy: 0.7227722772277227
- Precision: 0.6805555555555556
- Recall: 0.9074074074074074
- AUC: 0.7480299448384554
- F1: 0.7777777777777779
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Johny201/autotrain-article_pred-1142742075
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Johny201/autotrain-article_pred-1142742075", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Johny201/autotrain-article_pred-1142742075", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
domenicrosati/pegasus-xsum-finetuned-paws | 1b13d84a776bb988c995b48e0eedeef8d9b0cef7 | 2022-07-17T17:20:35.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:paws",
"transformers",
"paraphrasing",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | domenicrosati | null | domenicrosati/pegasus-xsum-finetuned-paws | 18 | null | transformers | 8,905 | ---
tags:
- paraphrasing
- generated_from_trainer
datasets:
- paws
metrics:
- rouge
model-index:
- name: pegasus-xsum-finetuned-paws
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: paws
type: paws
args: labeled_final
metrics:
- name: Rouge1
type: rouge
value: 92.4371
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-finetuned-paws
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the paws dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1199
- Rouge1: 92.4371
- Rouge2: 75.4061
- Rougel: 84.1519
- Rougelsum: 84.1958
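A minimal paraphrasing sketch (the input sentence and beam settings are illustrative, not taken from the training run):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("domenicrosati/pegasus-xsum-finetuned-paws")
model = AutoModelForSeq2SeqLM.from_pretrained("domenicrosati/pegasus-xsum-finetuned-paws")
inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```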
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.1481 | 1.46 | 1000 | 2.0112 | 93.7727 | 73.3021 | 84.2963 | 84.2506 |
| 2.0113 | 2.93 | 2000 | 2.0579 | 93.813 | 73.4119 | 84.3674 | 84.2693 |
| 2.054 | 4.39 | 3000 | 2.0890 | 93.3926 | 73.3727 | 84.2814 | 84.1649 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
naem1023/electra-phrase-clause-classification-dev | 88c782df3662e865ed019c7d62038803f86e2a47 | 2022-07-25T05:17:52.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | naem1023 | null | naem1023/electra-phrase-clause-classification-dev | 18 | null | transformers | 8,906 | ---
license: apache-2.0
---
|
erikanesse/test-trainer-gbb-4 | 97170c598ca76bb4a008cea67e7eaa08c02574ec | 2022-07-20T20:04:33.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | erikanesse | null | erikanesse/test-trainer-gbb-4 | 18 | 1 | transformers | 8,907 | ---
tags:
- generated_from_trainer
model-index:
- name: test-trainer-gbb-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer-gbb-4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Danessely/distilroberta-base-finetuned-dna | 9738b6cf8bcc95d56e8ecc0f5e542d436d936028 | 2022-07-20T11:17:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Danessely | null | Danessely/distilroberta-base-finetuned-dna | 18 | null | transformers | 8,908 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-dna
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-dna
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1615 | 1.0 | 8014 | 1.1578 |
| 1.1559 | 2.0 | 16028 | 1.1561 |
| 1.1503 | 3.0 | 24042 | 1.1475 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DL4NLP-Group4/huawei-noahTinyBERT_General_6L_768_HotpotQA | 67cfa2244eddf60dd16ed67bb62668f16fc20f12 | 2022-07-25T09:51:16.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:HotpotQA",
"transformers",
"tag1",
"tag2",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | DL4NLP-Group4 | null | DL4NLP-Group4/huawei-noahTinyBERT_General_6L_768_HotpotQA | 18 | null | transformers | 8,909 | ---
language:
- en
tags:
- tag1
- tag2
license: apache-2.0
datasets:
- HotpotQA
metrics:
- SQuad
---
This model was obtained by fine-tuning `huawei-noahTinyBERT_General_6L_768` on `HotpotQA`.
| EM | F1 |
|------------|----------|
| 31.552419 | 53.535072 |
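A minimal extractive-QA sketch (the question/context pair is illustrative; HotpotQA's multi-hop setting would supply longer, concatenated contexts):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="DL4NLP-Group4/huawei-noahTinyBERT_General_6L_768_HotpotQA")
result = qa(
    question="Which city hosted the 1992 Summer Olympics?",
    context="The 1992 Summer Olympics were held in Barcelona, Spain.",
)
print(result["answer"], result["score"])
```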
|
Evelyn18/roberta-base-spanish-squades-modelo2 | a2ce6eafb9b0624efdfc613e63230c1addfdbb6a | 2022-07-22T23:23:22.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-modelo2 | 18 | null | transformers | 8,910 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-modelo2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-modelo2
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 1.8825 |
| No log | 2.0 | 12 | 1.7787 |
| No log | 3.0 | 18 | 2.0521 |
| No log | 4.0 | 24 | 2.2991 |
| No log | 5.0 | 30 | 2.4029 |
| No log | 6.0 | 36 | 2.4358 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BigSalmon/InformalToFormalLincoln58Paraphrase | e0f66e221429b8dec0707ccef1fa7b94506d8a3a | 2022-07-26T23:10:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln58Paraphrase | 18 | null | transformers | 8,911 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln58Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln58Paraphrase")
```
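A hedged generation sketch using the prompt format documented below (sampling settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln58Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln58Paraphrase")
prompt = (
    "informal english: i am very ready to do that just that.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```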
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
make longer
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet: embodies compassion.
longer: is the personification of compassion.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: work in an office ).
translated into journalism speak: ( beaver away in windowless offices / toil in drab cubicles / clock in at faceless workstations / report for duty in cheerless quarters / log hours in colorless confines / clack away on keyboards in offices with cinderblock walls / stare at computer screens in bland partitions / shuffle through mounds of paperwork in humdrum offices ).
***
original: easy job ).
translated into journalism speak: ( cushy / hassle-free / uninvolved / vanilla / sedentary / straightforward / effortless / lax / plush / frictionless / painless ) ( gig / perch / post / trade / calling / paycheck ).
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
original: big businesses ).
translated into journalism speak: corporate ( behemoths / heavyweights / titans / steamrollers / powerhouses / bigwigs / kahunas / brutes / honchos / barons / kingpins / rainmakers / headliners ).
***
original: environmental movement ).
translated into journalism speak: ( green lobby / conservationist camp / tree-huggers / ecology-obsessed / sustainability crusaders / preservation-crazed / ecological campaigners ).
***
original:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
``` |
simecek/knotted_proteins_demo_model | beb41f53eb7ef8e560fa0f6d047777f1dcf384de | 2022-07-27T10:35:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | simecek | null | simecek/knotted_proteins_demo_model | 18 | null | transformers | 8,912 | Entry not found |
xlm-mlm-enro-1024 | 7f160633a5699aafb54fe49e89dd7fe52afefc67 | 2022-07-22T08:08:35.000Z | [
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"ro",
"arxiv:1901.07291",
"arxiv:1910.09700",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | fill-mask | false | null | null | xlm-mlm-enro-1024 | 17 | null | transformers | 8,913 | ---
language:
- multilingual
- en
- ro
license: cc-by-nc-4.0
---
# xlm-mlm-enro-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-enro-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-Romanian. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-Romanian
- **License:** cc-by-nc-4.0
- **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-ende-1024](https://huggingface.co/xlm-mlm-ende-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
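A sketch adapted from the linked multilingual docs, showing how a `langs` tensor supplies the language embeddings (the input sentence is illustrative):
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-enro-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-enro-1024")
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size 1
language_id = tokenizer.lang2id["en"]  # "en" or "ro" for this checkpoint
langs = torch.full_like(input_ids, language_id)  # one language id per input position
outputs = model(input_ids, langs=langs)
```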
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the [WMT'16 English-Romanian](https://huggingface.co/datasets/wmt16) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-enro-1024 results, see Tables 1-3 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. |
AI-Nordics/bert-large-swedish-cased | b7925d4c25c2ec8ebc0e73493c18180e5875d34e | 2022-02-15T16:52:53.000Z | [
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AI-Nordics | null | AI-Nordics/bert-large-swedish-cased | 17 | 5 | transformers | 8,914 | ---
language: sv
---
# A Swedish Bert model
## Model description
This model follows the BERT Large model architecture as implemented in the [Megatron-LM framework](https://github.com/NVIDIA/Megatron-LM). It was trained with a batch size of 512 for 600k steps. The model contains the following parameters:
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 340M |
| \\(n_{layers}\\) | 24 |
| \\(n_{heads}\\) | 16 |
| \\(n_{ctx}\\) | 1024 |
| \\(n_{vocab}\\) | 30592 |
## Training data
The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below.
| Dataset | Genre | Size(GB)|
|----------------------|------|------|
| Anföranden | Politics |0.9|
|DCEP|Politics|0.6|
|DGT|Politics|0.7|
|Fass|Medical|0.6|
|Författningar|Legal|0.1|
|Web data|Misc|45.0|
|JRC|Legal|0.4|
|Litteraturbanken|Books|0.3|
|SCAR|Misc|28.0|
|SOU|Politics|5.3|
|Subtitles|Drama|1.3|
|Wikipedia|Facts|1.8|
## Intended uses & limitations
The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
<br>
<br>
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("AI-Nordics/bert-large-swedish-cased")
model = AutoModelForMaskedLM.from_pretrained("AI-Nordics/bert-large-swedish-cased")
```
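A short fill-mask sketch for the raw model (the Swedish example sentence is illustrative):
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="AI-Nordics/bert-large-swedish-cased")
print(unmasker("Huvudstaden i Sverige är [MASK]."))
```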
|
ARTeLab/it5-summarization-mlsum | 5373dc3b5f7d63a7f338a5c565dd5a18b6376805 | 2022-05-03T06:06:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/mlsum-it",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ARTeLab | null | ARTeLab/it5-summarization-mlsum | 17 | null | transformers | 8,915 | ---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_mlsum
results: []
datasets:
- ARTeLab/mlsum-it
---
# summarization_mlsum
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on MLSum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 2.0190
- Rouge1: 19.3739
- Rouge2: 5.9753
- Rougel: 16.691
- Rougelsum: 16.7862
- Gen Len: 32.5268
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-mlsum")
model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-mlsum")
```
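A minimal summarization sketch (the Italian input and generation settings are illustrative; the card does not state whether a task prefix is required, so none is used here):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-mlsum")
model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-mlsum")
article = "Il governo ha approvato oggi una nuova legge sul clima che prevede la riduzione delle emissioni entro il 2030."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```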
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
# Citation
More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228)
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
``` |
AkshatSurolia/ConvNeXt-FaceMask-Finetuned | 5127ba7a1dc3ca50b44c2bf75326818fa1bc8d37 | 2022-02-18T13:51:14.000Z | [
"pytorch",
"convnext",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0"
] | image-classification | false | AkshatSurolia | null | AkshatSurolia/ConvNeXt-FaceMask-Finetuned | 17 | null | transformers | 8,916 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- Face-Mask18K
---
# ConvNeXt for Face Mask Detection
ConvNeXt model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.
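A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="AkshatSurolia/ConvNeXt-FaceMask-Finetuned")
print(classifier("example_photo.jpg"))  # local path or URL of an image
```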
## Training Metrics
- epoch = 3.54
- total_flos = 1195651761GF
- train_loss = 0.0079
- train_runtime = 1:08:20.25
- train_samples_per_second = 14.075
- train_steps_per_second = 0.22
---
## Evaluation Metrics
- epoch = 3.54
- eval_accuracy = 0.9961
- eval_loss = 0.0151
- eval_runtime = 0:01:23.47
- eval_samples_per_second = 43.079
- eval_steps_per_second = 5.391
|
Aleksandar/bert-srb-ner | 1774bdf93f0afd493632d7ebd5d3dc4e3e3c31c6 | 2021-09-07T21:20:22.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | false | Aleksandar | null | Aleksandar/bert-srb-ner | 17 | null | transformers | 8,917 | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-srb-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: sr
metric:
name: Accuracy
type: accuracy
value: 0.9546696220907545
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3561
- Precision: 0.8909
- Recall: 0.9082
- F1: 0.8995
- Accuracy: 0.9547
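A minimal NER sketch (the Serbian sentence is illustrative; the label set follows WikiANN: PER, ORG, LOC):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="Aleksandar/bert-srb-ner",
    aggregation_strategy="simple",
)
print(ner("Novak Đoković je rođen u Beogradu."))
```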
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3907 | 1.0 | 625 | 0.2316 | 0.8255 | 0.8314 | 0.8285 | 0.9259 |
| 0.2091 | 2.0 | 1250 | 0.1920 | 0.8598 | 0.8731 | 0.8664 | 0.9420 |
| 0.1562 | 3.0 | 1875 | 0.1833 | 0.8608 | 0.8820 | 0.8713 | 0.9441 |
| 0.0919 | 4.0 | 2500 | 0.1985 | 0.8712 | 0.8886 | 0.8798 | 0.9476 |
| 0.0625 | 5.0 | 3125 | 0.2195 | 0.8762 | 0.8923 | 0.8842 | 0.9485 |
| 0.0545 | 6.0 | 3750 | 0.2320 | 0.8706 | 0.9004 | 0.8852 | 0.9495 |
| 0.0403 | 7.0 | 4375 | 0.2459 | 0.8817 | 0.8957 | 0.8887 | 0.9505 |
| 0.0269 | 8.0 | 5000 | 0.2603 | 0.8813 | 0.9021 | 0.8916 | 0.9516 |
| 0.0193 | 9.0 | 5625 | 0.2916 | 0.8812 | 0.8949 | 0.8880 | 0.9500 |
| 0.0162 | 10.0 | 6250 | 0.2938 | 0.8814 | 0.9025 | 0.8918 | 0.9520 |
| 0.0134 | 11.0 | 6875 | 0.3330 | 0.8809 | 0.8961 | 0.8885 | 0.9497 |
| 0.0076 | 12.0 | 7500 | 0.3141 | 0.8840 | 0.9025 | 0.8932 | 0.9524 |
| 0.0069 | 13.0 | 8125 | 0.3292 | 0.8819 | 0.9065 | 0.8940 | 0.9535 |
| 0.0053 | 14.0 | 8750 | 0.3454 | 0.8844 | 0.9018 | 0.8930 | 0.9523 |
| 0.0038 | 15.0 | 9375 | 0.3519 | 0.8912 | 0.9061 | 0.8986 | 0.9539 |
| 0.0034 | 16.0 | 10000 | 0.3437 | 0.8894 | 0.9038 | 0.8965 | 0.9539 |
| 0.0024 | 17.0 | 10625 | 0.3518 | 0.8896 | 0.9072 | 0.8983 | 0.9543 |
| 0.0018 | 18.0 | 11250 | 0.3572 | 0.8877 | 0.9072 | 0.8973 | 0.9543 |
| 0.0015 | 19.0 | 11875 | 0.3554 | 0.8910 | 0.9081 | 0.8994 | 0.9549 |
| 0.0011 | 20.0 | 12500 | 0.3561 | 0.8909 | 0.9082 | 0.8995 | 0.9547 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aries/T5_question_answering | 03a6a5b19f0b7b776ae4c7b3dac91464a8f59c71 | 2021-06-23T02:02:37.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aries | null | Aries/T5_question_answering | 17 | null | transformers | 8,918 | Entry not found |
BSC-TeMU/roberta-base-bne-sqac | f86245feb25c0ed586c0ca0bbcd9d35e0e83f7b7 | 2021-10-21T10:30:10.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | BSC-TeMU | null | BSC-TeMU/roberta-base-bne-sqac | 17 | 3 | transformers | 8,919 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "qa"
- "question answering"
datasets:
- "BSC-TeMU/SQAC"
metrics:
- "f1"
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac
# Spanish RoBERTa-base trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset.
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
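A minimal usage sketch with the question-answering pipeline (the example question and context are illustrative):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="BSC-TeMU/roberta-base-bne-sqac")
result = qa(
    question="¿Dónde vive Manuel?",
    context="Me llamo Manuel y vivo en Sevilla.",
)
print(result["answer"])
```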
## Dataset
The dataset used is the [SQAC corpus](https://huggingface.co/datasets/BSC-TeMU/SQAC).
## Evaluation and results
F1 Score: 0.7923 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | 1e8669667dc99b989defa2bb336c866dff546528 | 2021-10-17T11:14:08.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | 17 | null | transformers | 8,920 | ---
language:
- ar
license: apache-2.0
widget:
- text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
---
# CAMeLBERT-CA NER Model
## Model description
**CAMeLBERT-CA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CenIA/albert-base-spanish-finetuned-mldoc | 60c591701e9640db2aa2d4b3cb63cf59a0c81e0f | 2022-01-10T10:15:50.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/albert-base-spanish-finetuned-mldoc | 17 | null | transformers | 8,921 | Entry not found |
CenIA/bert-base-spanish-wwm-uncased-finetuned-ner | 82c7146b2bba0bc24f9e8284abc9bd45999f5735 | 2021-12-28T21:18:27.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-ner | 17 | null | transformers | 8,922 | Entry not found |
CleveGreen/JobClassifier | a00e957489ddc0ca9a22d11d5dd9b7c4c92a7bd7 | 2021-08-03T18:10:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CleveGreen | null | CleveGreen/JobClassifier | 17 | null | transformers | 8,923 | Entry not found |
Davlan/xlm-roberta-base-finetuned-luo | 867d5ab268a019e58091c7e4aada56b95cc045ab | 2021-06-30T21:21:39.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"luo",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-luo | 17 | null | transformers | 8,924 |
---
language: luo
datasets:
---
# xlm-roberta-base-finetuned-luo
## Model description
**xlm-roberta-base-finetuned-luo** is a **Luo RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Luo language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Luo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-luo')
>>> unmasker("Obila ma Changamwe <mask> pedho achije angwen mag njore")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | luo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 74.86 | 75.27
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-wolof | b1a3292ca97ed8112b28cc7107d761e1e3783fa1 | 2021-06-30T15:56:31.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"wo",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-wolof | 17 | null | transformers | 8,925 |
---
language: wo
datasets:
---
# xlm-roberta-base-finetuned-wolof
## Model description
**xlm-roberta-base-finetuned-wolof** is a **Wolof RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Wolof language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Wolof corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-wolof')
>>> unmasker("Màkki Sàll feeñal na ay xalaatam ci mbir yu am solo yu soxal <mask> ak Afrik.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Bible OT](http://biblewolof.com/) + [OPUS](https://opus.nlpl.eu/) + News Corpora (Lu Defu Waxu, Saabal, and Wolof Online)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | wo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 63.86 | 68.31
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-yoruba | 6d9e5182a87d1a4e8b2a3feb66fc7c6b3c665b45 | 2021-05-28T13:53:56.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"yo",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-yoruba | 17 | null | transformers | 8,926 |
---
language: yo
datasets:
---
# xlm-roberta-base-finetuned-yoruba
## Model description
**xlm-roberta-base-finetuned-yoruba** is a **Yoruba RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Yorùbá language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Yorùbá corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-yoruba')
>>> unmasker("Arẹmọ Phillip to jẹ ọkọ <mask> Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
[{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.24844281375408173,
'token': 44109,
'token_str': '▁Queen'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ile Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.1665010154247284,
'token': 1350,
'token_str': '▁ile'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.07604238390922546,
'token': 1053,
'token_str': '▁ti'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ baba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.06353845447301865,
'token': 12878,
'token_str': '▁baba'},
{'sequence': '<s> Arẹmọ Phillip to jẹ ọkọ Oba Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun</s>', 'score': 0.03836742788553238,
'token': 82879,
'token_str': '▁Oba'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | yo_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 77.58 | 83.66
[BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | |
### BibTeX entry and citation info
By David Adelani
```
```
|
EhsanAghazadeh/xlnet-large-cased-CoLA_A | 48a6ccfe1c747c145e5f74d93f4c85bd35bf43c4 | 2021-04-19T10:05:16.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/xlnet-large-cased-CoLA_A | 17 | null | transformers | 8,927 | Entry not found |
Emmanuel/bert-finetuned-ner | a6e1e133d710c8cbd1c251326c220fd6a366098f | 2021-12-01T11:05:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Emmanuel | null | Emmanuel/bert-finetuned-ner | 17 | null | transformers | 8,928 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9317394888705688
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9412842508536686
- name: Accuracy
type: accuracy
value: 0.9865779713898863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9317
- Recall: 0.9510
- F1: 0.9413
- Accuracy: 0.9866
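A minimal inference sketch, assuming the checkpoint works with the standard `token-classification` pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# Group sub-word predictions into whole entities
ner = pipeline(
    "token-classification",
    model="Emmanuel/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```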
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0872 | 1.0 | 1756 | 0.0660 | 0.9152 | 0.9350 | 0.9250 | 0.9827 |
| 0.0386 | 2.0 | 3512 | 0.0579 | 0.9374 | 0.9498 | 0.9436 | 0.9864 |
| 0.0225 | 3.0 | 5268 | 0.0603 | 0.9317 | 0.9510 | 0.9413 | 0.9866 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Geotrend/distilbert-base-25lang-cased | fda800c1a39bf7e8f2498f1646fa60dd0e40eb6c | 2021-07-26T16:11:17.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-25lang-cased | 17 | 1 | transformers | 8,929 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# distilbert-base-25lang-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-25lang-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-25lang-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-ja-cased | a0d6bc69bf1e1d710a37f2beba947fee997bd530 | 2021-07-29T17:01:48.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"ja",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-ja-cased | 17 | null | transformers | 8,930 | ---
language: ja
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-ja-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ja-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-ja-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Helsinki-NLP/opus-mt-ccs-en | 7b9239b35748e55442b0f85d89614a1156c371b1 | 2021-01-18T07:53:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ka",
"ccs",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ccs-en | 17 | null | transformers | 8,931 | ---
language:
- ka
- ccs
- en
tags:
- translation
license: apache-2.0
---
### ccs-eng
* source group: South Caucasian languages
* target group: English
* OPUS readme: [ccs-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md)
* model: transformer
* source language(s): kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kat-eng.kat.eng | 18.0 | 0.357 |
| Tatoeba-test.multi.eng | 18.0 | 0.357 |
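A minimal translation sketch, assuming the standard Marian classes in `transformers`; the Georgian example sentence ("Hello, how are you?") and the default generation settings are illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ccs-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Georgian sentence into English
batch = tokenizer(["გამარჯობა, როგორ ხარ?"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```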
### System Info:
- hf_name: ccs-eng
- source_languages: ccs
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ka', 'ccs', 'en']
- src_constituents: {'kat'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt
- src_alpha3: ccs
- tgt_alpha3: eng
- short_pair: ccs-en
- chrF2_score: 0.35700000000000004
- bleu: 18.0
- brevity_penalty: 1.0
- ref_len: 5992.0
- src_name: South Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: ccs
- tgt_alpha2: en
- prefer_old: False
- long_pair: ccs-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-crs-en | 7ee4bb979dd28886b7d98f890298c4548e84a847 | 2021-09-09T21:28:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"crs",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-crs-en | 17 | null | transformers | 8,932 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-crs-en
* source languages: crs
* target languages: en
* OPUS readme: [crs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.en | 42.9 | 0.589 |
|
Helsinki-NLP/opus-mt-da-es | 59b50e55d16babe69b0facb1fb1c4dfb175328fe | 2021-09-09T21:29:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-da-es | 17 | null | transformers | 8,933 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-da-es
* source languages: da
* target languages: es
* OPUS readme: [da-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.es | 53.7 | 0.715 |
|
Helsinki-NLP/opus-mt-de-hu | 4b30440320ea86d33b6927fe70c46e20f671da86 | 2021-09-09T21:31:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"hu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-hu | 17 | null | transformers | 8,934 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-hu
* source languages: de
* target languages: hu
* OPUS readme: [de-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-hu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hu/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.hu | 34.3 | 0.588 |
|
Helsinki-NLP/opus-mt-el-sv | e8894cf2f5713e1cc68fe7710636ecc4b4dc99d7 | 2021-09-09T21:33:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"el",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-el-sv | 17 | null | transformers | 8,935 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-el-sv
* source languages: el
* target languages: sv
* OPUS readme: [el-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.el.sv | 23.6 | 0.498 |
|
Helsinki-NLP/opus-mt-en-ln | 95c6ade5cb0569f7f73a98a8b1dbb4955ddd3107 | 2021-09-09T21:36:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ln",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ln | 17 | null | transformers | 8,936 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ln
* source languages: en
* target languages: ln
* OPUS readme: [en-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ln | 36.7 | 0.588 |
|
Helsinki-NLP/opus-mt-en-ng | 02eba1d2ddf774aec3558ae031c3795d4ded61c8 | 2021-09-09T21:37:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ng",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ng | 17 | null | transformers | 8,937 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ng
* source languages: en
* target languages: ng
* OPUS readme: [en-ng](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ng/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ng | 24.8 | 0.496 |
|
Helsinki-NLP/opus-mt-eo-es | a79df1be257257e0247b626143f263d7a6b28ab8 | 2021-09-09T21:40:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-es | 17 | null | transformers | 8,938 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-eo-es
* source languages: eo
* target languages: es
* OPUS readme: [eo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.es | 44.2 | 0.631 |
|
Helsinki-NLP/opus-mt-es-ber | f7a613f7b3b150e1edeeaebcd692388cbe55dc74 | 2021-09-09T21:41:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ber",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ber | 17 | null | transformers | 8,939 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ber
* source languages: es
* target languages: ber
* OPUS readme: [es-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ber | 21.8 | 0.444 |
|
Helsinki-NLP/opus-mt-es-ro | 6f05b59d19efade88c6b62c383d542ddadda6d5c | 2021-09-09T21:44:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ro",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ro | 17 | null | transformers | 8,940 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ro
* source languages: es
* target languages: ro
* OPUS readme: [es-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ro | 45.7 | 0.666 |
|
Helsinki-NLP/opus-mt-eu-ru | 5e5ec0f9c48f49314b9c83d0f6b338d4efa81fef | 2021-01-18T08:31:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eu",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eu-ru | 17 | null | transformers | 8,941 | ---
language:
- eu
- ru
tags:
- translation
license: apache-2.0
---
### eus-rus
* source group: Basque
* target group: Russian
* OPUS readme: [eus-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eus-rus/README.md)
* model: transformer-align
* source language(s): eus
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eus.rus | 31.3 | 0.502 |
### System Info:
- hf_name: eus-rus
- source_languages: eus
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eus-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eu', 'ru']
- src_constituents: {'eus'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eus-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eus-rus/opus-2020-06-16.test.txt
- src_alpha3: eus
- tgt_alpha3: rus
- short_pair: eu-ru
- chrF2_score: 0.502
- bleu: 31.3
- brevity_penalty: 0.9420000000000001
- ref_len: 2428.0
- src_name: Basque
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: eu
- tgt_alpha2: ru
- prefer_old: False
- long_pair: eus-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI | 56e0e8ec89bd4161facd110b562332a309596562 | 2021-09-09T21:52:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"nb",
"no",
"nn",
"ru",
"sv",
"en",
"sami",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI | 17 | null | transformers | 8,942 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi_nb_no_nn_ru_sv_en-SAMI
* source languages: fi,nb,no,nn,ru,sv,en
* target languages: se,sma,smj,smn,sms
* OPUS readme: [fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/README.md)
* dataset: opus+giella
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus+giella-2020-04-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/opus+giella-2020-04-18.zip)
* test set translations: [opus+giella-2020-04-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/opus+giella-2020-04-18.test.txt)
* test set scores: [opus+giella-2020-04-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/opus+giella-2020-04-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| giella.fi.sms | 58.4 | 0.776 |
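A minimal usage sketch showing the required sentence-initial target-language token; the Finnish example sentence ("Good morning!") and the `>>sms<<` (Skolt Sami) target are illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial token selects the target language, e.g. >>sms<< for Skolt Sami
batch = tokenizer([">>sms<< Hyvää huomenta!"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```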
|
Helsinki-NLP/opus-mt-kwy-en | b369959cbe8dfdb1dceab0263ec2d7d1243deeb1 | 2021-09-10T13:54:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"kwy",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-kwy-en | 17 | null | transformers | 8,943 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-kwy-en
* source languages: kwy
* target languages: en
* OPUS readme: [kwy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwy-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kwy.en | 31.6 | 0.466 |
|
Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa | a28144143d1d5e600ba792309d0ca0befa49cc41 | 2021-12-05T13:40:34.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"en",
"arxiv:2111.05754",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Intel | null | Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa | 17 | null | transformers | 8,944 | ---
language: en
---
# 85% Sparse DistilBERT-Base (uncased) Prune OFA
This model is a result of our paper [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), presented at the ENLSP NeurIPS Workshop 2021.
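A minimal usage sketch, assuming the checkpoint ships a masked-LM head compatible with the standard `fill-mask` pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Intel/distilbert-base-uncased-sparse-85-unstructured-pruneofa",
)
print(unmasker("Paris is the [MASK] of France."))
```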
For further details on the model and its result, see our paper and our implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all). |
Kamel/t5-darija-summarization | a37681e9e9616578ea07b11011686e7c290755ed | 2022-05-24T08:40:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ar",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Kamel | null | Kamel/t5-darija-summarization | 17 | null | transformers | 8,945 | ---
language: ar
widget:
- text: " كشف الملياردير الميريكاني ومؤسس شركة “مايكروسوفت”، بيل كَيتس، بللي ماعندوش حتى فلوس رقمية، وكيفضل يستثمر فلوسو فالأشياء اللي عندها قيمة، حسب كلامو. جريدة “بريطانية قالت أن تصريحات كَيتس على العملات المشفرة كانت بمناسبة حدث “سولني على أي حاجة”، اللي تنظم على موقع “ريديت” الشهير.بيل كَيتس اللي واصلة لافورتين ديالو ل116 مليار دولار، وهو رابع أغنى رجل فالعالم، جات تصريحاتو بالتزامن مع خسارة العملات الرقمية لتريليون دولار من قيمتها فعام 2022، وضاعت فحوالي 200 مليار دولار من قيمتها ف24 ساعة فقط فوقت سابق من هذا الشهر."
--- |
KoichiYasuoka/bert-base-japanese-unidic-luw-upos | b9ed3ff80dec8b8890849f75b001525385081eda | 2022-05-23T16:18:10.000Z | [
"pytorch",
"bert",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-base-japanese-unidic-luw-upos | 17 | null | transformers | 8,946 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# bert-base-japanese-unidic-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required.
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
Langame/convai-gpt-j-6B-8bit | 3bb74579a62111484f8da22fb2c9aac80e10b586 | 2021-12-28T20:20:49.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | Langame | null | Langame/convai-gpt-j-6B-8bit | 17 | 1 | transformers | 8,947 | Entry not found |
LiqiangXiao/summarization | 895bc8522abd1a957220caca3f6812026bce7fd7 | 2022-01-20T05:01:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | LiqiangXiao | null | LiqiangXiao/summarization | 17 | 4 | transformers | 8,948 | ## Copy-or-Rewrite
This repository contains the code of the paper "Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning". The model is built for a human-like summarization task and trained with actor-critic reinforcement learning. This work significantly improved the ROUGE scores on the CNN/DM dataset by 1.7 points and improved the informativeness and readability of the generated summaries. It implements a more human-like workflow for summarization that addresses the information-loss problem. It contains a novel hierarchical transformer module that represents an article at both the word and sentence level, and a new reinforcement learning method that can effectively train the two-step model.
## Model description
Copy-or-Rewrite is a model that improves the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from information loss in the abstraction step because they compress all the selected sentences without distinction. Especially when a whole sentence is summary-worthy, salient content is lost by compression. To address this problem, we propose HYSUM, a hybrid framework for summarization that can flexibly switch between copying a sentence and rewriting it according to the degree of redundancy. In this way, our approach can effectively combine the advantages of the two branches of summarization, balancing informativeness and conciseness. Moreover, based on Hierarchical Reinforcement Learning, we propose an end-to-end reinforcement method to bridge the extraction module and the rewriting module, which enhances the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state of the art on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than those of popular models.
## Intended uses & limitations
With this repository, you can generate informative and concise summaries for input articles. For other tasks, you may use the hierarchical representation module to effectively represent the article. The parameters of the model are pre-trained on the CNN/DM dataset. You may need to fine-tune it on your own dataset when needed.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization")
```
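A generation sketch building on the snippet above; the decoding parameters are illustrative defaults, not the settings reported in the paper:

```python
article = "Put the full text of a news article here."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(
    **inputs,
    num_beams=4,
    max_length=142,
    min_length=56,
    length_penalty=2.0,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```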
## Training data
This model used the non-anonymous version of CNN/Daily Mail dataset.
## BibTeX entry and citation info
@inproceedings{DBLP:conf/aaai/XiaoWHJ20,
author = {Liqiang Xiao and
Lu Wang and
Hao He and
Yaohui Jin},
title = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement
Learning},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {9306--9313},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6470},
timestamp = {Tue, 02 Feb 2021 08:00:14 +0100},
biburl = {https://dblp.org/rec/conf/aaai/XiaoWHJ20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
|
Maelstrom77/roberta-large-mnli | 97ce0942bc885c801e7130270360da8a3df4e3ba | 2021-10-04T14:15:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Maelstrom77 | null | Maelstrom77/roberta-large-mnli | 17 | null | transformers | 8,949 | Entry not found |
Magolor/deepstruct | a9d6ab6a2dc7d55530a047055a2c831ff00ad7bb | 2022-07-07T07:38:26.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Magolor | null | Magolor/deepstruct | 17 | null | transformers | 8,950 | Entry not found |
NDugar/deberta-v2-xlarge-mnli | 9eb624deda78b9b426bb8ebebe6d22ea3ddfb520 | 2021-12-17T17:05:08.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"transformers",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
] | zero-shot-classification | false | NDugar | null | NDugar/deberta-v2-xlarge-mnli | 17 | null | transformers | 8,951 | ---
language: en
tags:
- deberta-v3
- deberta-v2`
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
I tried to train v3 xl to mnli using my own training code and got this result. |
Norod78/hebrew_poetry-gpt_neo-tiny | e54c4c90fb550e9cc8c1cc1b35f4ebefcf4fefa7 | 2022-07-04T07:26:05.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"he",
"transformers",
"license:mit"
] | text-generation | false | Norod78 | null | Norod78/hebrew_poetry-gpt_neo-tiny | 17 | null | transformers | 8,952 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "שתי רכבות דוהרות בתוך עיני"
- text: "הים כחול ואני"
- text: "שם היצירה:"
- text: "רציתי"
license: mit
---
# hebrew_poetry-gpt_neo-tiny
Hebrew poetry text generation model, fine-tuned upon [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
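A minimal generation sketch using one of the widget prompts above; the sampling settings are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Norod78/hebrew_poetry-gpt_neo-tiny")
result = generator(
    "שתי רכבות דוהרות בתוך עיני",  # one of the widget prompts
    max_length=50,
    do_sample=True,
    top_p=0.95,
)
print(result[0]["generated_text"])
```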
## Datasets
1. Text from [New stage](http://stage.co.il/)
2. A dataset containing Hebrew lyrics
|
Palak/xlm-roberta-large_squad | 63239100dd057b6e558c206136645a36f3e5a485 | 2021-12-25T20:19:12.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Palak | null | Palak/xlm-roberta-large_squad | 17 | null | transformers | 8,953 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xlm-roberta-base_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_squad
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad dataset.
- eval_exact_match: 85.96026490066225
- eval_f1: 92.25000664341768
- eval_samples: 10918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.67
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SEBIS/legal_t5_small_summ_multitask_en | 2a5c14ca327fe3fc93ed077e69afd90157c7b04d | 2021-06-23T11:25:29.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_multitask_en | 17 | null | transformers | 8,954 | Entry not found |
Sid51/ChanBot | 77c32d28ac08838896cc360cdf7df03880d90735 | 2021-06-12T17:02:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Sid51 | null | Sid51/ChanBot | 17 | null | transformers | 8,955 | Entry not found |
Tereveni-AI/gpt2-124M-uk-fiction | 50a873cc91bf7ebb4f864884b27de35ac96b2c96 | 2021-05-21T11:16:43.000Z | [
"pytorch",
"jax",
"gpt2",
"uk",
"transformers"
] | null | false | Tereveni-AI | null | Tereveni-AI/gpt2-124M-uk-fiction | 17 | 2 | transformers | 8,956 | ---
language: uk
---
Note: **default code snippet above won't work** because we are using `AlbertTokenizer` with `GPT2LMHeadModel`, see [issue](https://github.com/huggingface/transformers/issues/4285).
## GPT2 124M Trained on Ukranian Fiction
### Training details
Model was trained on corpus of 4040 fiction books, 2.77 GiB in total.
Evaluation on [brown-uk](https://github.com/brown-uk/corpus) gives perplexity of 50.16.
### Example usage:
```python
from transformers import AlbertTokenizer, GPT2LMHeadModel
tokenizer = AlbertTokenizer.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model = GPT2LMHeadModel.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
input_ids = tokenizer.encode("Но зла Юнона, суча дочка,", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=3,
max_length=50
)
for i, out in enumerate(outputs):
print("{}: {}".format(i, tokenizer.decode(out)))
```
Prints something like this:
```bash
0: Но зла Юнона, суча дочка, яка затьмарила всі її таємниці: І хто з'їсть її душу, той помре». І, не дочекавшись гніву богів, посунула в пітьму, щоб не бачити перед собою. Але, за
1: Но зла Юнона, суча дочка, і довела мене до божевілля. Але він не знав нічого. Після того як я його побачив, мені стало зле. Я втратив рівновагу. Але в мене не було часу на роздуми. Я вже втратив надію
2: Но зла Юнона, суча дочка, не нарікала нам! — раптом вигукнула Юнона. — Це ти, старий йолопе! — мовила вона, не перестаючи сміятись. — Хіба ти не знаєш, що мені подобається ходити з тобою?
``` |
Yv/bert-finetuned-ner | 7317243761a7afb2022d2258a0da636638d3f993 | 2021-12-23T13:08:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Yv | null | Yv/bert-finetuned-ner | 17 | null | transformers | 8,957 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9369817578772802
- name: Recall
type: recall
value: 0.9508582968697409
- name: F1
type: f1
value: 0.9438690277313732
- name: Accuracy
type: accuracy
value: 0.9868575969859305
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0598
- Precision: 0.9370
- Recall: 0.9509
- F1: 0.9439
- Accuracy: 0.9869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0871 | 1.0 | 1756 | 0.0633 | 0.9197 | 0.9362 | 0.9279 | 0.9833 |
| 0.0386 | 2.0 | 3512 | 0.0572 | 0.9351 | 0.9483 | 0.9417 | 0.9866 |
| 0.0214 | 3.0 | 5268 | 0.0598 | 0.9370 | 0.9509 | 0.9439 | 0.9869 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
airKlizz/xlm-roberta-base-germeval21-toxic-with-data-augmentation | e70c08c3fdcefc97a19dd2d3683a4907295835a3 | 2021-07-12T14:44:31.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | airKlizz | null | airKlizz/xlm-roberta-base-germeval21-toxic-with-data-augmentation | 17 | null | transformers | 8,958 | Entry not found |
alireza7/TRANSFORMER-persian-base-PN-summary | 26cb3aa982743edc58b5b6649a7084321fcc3aa2 | 2021-09-29T19:26:30.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/TRANSFORMER-persian-base-PN-summary | 17 | null | transformers | 8,959 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2 | 68d021702db7486df720c1fa321611456f5105ad | 2021-11-19T10:36:39.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | amazon-sagemaker-community | null | amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2 | 17 | null | transformers | 8,960 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-en-ru-emoji-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-en-ru-emoji-v2
This model is a fine-tuned version of [DeepPavlov/xlm-roberta-large-en-ru](https://huggingface.co/DeepPavlov/xlm-roberta-large-en-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3356
- Accuracy: 0.3102
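A minimal inference sketch, assuming the checkpoint works with the standard `text-classification` pipeline; the example sentences are illustrative, and the returned label names depend on how the emoji classes were encoded during training:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2",
)
print(classifier("I love this so much!"))
print(classifier("Это просто ужасно..."))
```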
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.4 | 200 | 3.0592 | 0.1204 |
| No log | 0.81 | 400 | 2.5356 | 0.2480 |
| 2.6294 | 1.21 | 600 | 2.4570 | 0.2569 |
| 2.6294 | 1.62 | 800 | 2.3332 | 0.2832 |
| 1.9286 | 2.02 | 1000 | 2.3354 | 0.2803 |
| 1.9286 | 2.42 | 1200 | 2.3610 | 0.2881 |
| 1.9286 | 2.83 | 1400 | 2.3004 | 0.2973 |
| 1.7312 | 3.23 | 1600 | 2.3619 | 0.3026 |
| 1.7312 | 3.64 | 1800 | 2.3596 | 0.3032 |
| 1.5816 | 4.04 | 2000 | 2.2972 | 0.3072 |
| 1.5816 | 4.44 | 2200 | 2.3077 | 0.3073 |
| 1.5816 | 4.85 | 2400 | 2.3356 | 0.3102 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
tner/xlm-roberta-large-fin | 3f890642c00d19fa8a5acd7d3d5217f17705e80d | 2021-02-13T00:04:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-fin | 17 | null | transformers | 8,961 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
``` |
bettertextapp/m2m_1.2B_paraphrase_en_de_v1 | b6a8b5bb37161fe24e583fc707492d7c75f31a17 | 2022-02-14T22:18:29.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | bettertextapp | null | bettertextapp/m2m_1.2B_paraphrase_en_de_v1 | 17 | null | transformers | 8,962 | Entry not found |
bhavikardeshna/xlm-roberta-base-chinese | 15724e33738f0cee130f28b9fdd4f374280afcbf | 2021-12-21T11:40:50.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
] | question-answering | false | bhavikardeshna | null | bhavikardeshna/xlm-roberta-base-chinese | 17 | null | transformers | 8,963 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bigwiz83/sapbert-from-pubmedbert-squad2 | 5a14b59be173f2073caae87ddd7a1e5a3ee3053f | 2021-07-02T12:05:14.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"autotrain_compatible"
] | question-answering | false | bigwiz83 | null | bigwiz83/sapbert-from-pubmedbert-squad2 | 17 | null | transformers | 8,964 | ---
datasets:
- squad_v2
model_index:
- name: sapbert-from-pubmedbert-squad2
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad_v2
type: squad_v2
args: squad_v2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sapbert-from-pubmedbert-squad2
This model is a fine-tuned version of [cambridgeltl/SapBERT-from-PubMedBERT-fulltext](https://huggingface.co/cambridgeltl/SapBERT-from-PubMedBERT-fulltext) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.035 | 1.0 | 8298 | 0.9545 |
| 0.8053 | 2.0 | 16596 | 0.9988 |
| 0.5949 | 3.0 | 24894 | 0.9909 |
| 0.4878 | 4.0 | 33192 | 1.1428 |
| 0.3932 | 5.0 | 41490 | 1.2582 |
### Framework versions
- Transformers 4.7.0
- Pytorch 1.8.0
- Datasets 1.4.1
- Tokenizers 0.10.2
|
cahya/wav2vec2-base-turkish | e9a97a269d58efb5393becfb7f55a484e0070e80 | 2022-03-23T18:26:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-base-turkish | 17 | 4 | transformers | 8,965 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- tr
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2 Base Turkish by Cahya
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: mozilla-foundation/common_voice_7_0
args: tr
metrics:
- name: Test WER
type: wer
value: 9.437
- name: Test CER
type: cer
value: 3.325
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: tr
metrics:
- name: Test WER
type: wer
value: 8.147
- name: Test CER
type: cer
value: 2.802
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 28.011
- name: Test CER
type: cer
value: 10.66
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 33.62
---
#
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
| | Dataset | WER | CER |
|---|-------------------------------|---------|----------|
| 1 | Common Voice 6.1 | 9.437 | 3.325 |
| 2 | Common Voice 7.0 | 8.147 | 2.802 |
| 3 | Common Voice 8.0 | 8.335 | 2.336 |
| 4 | Speech Recognition Community | 28.011 | 10.66 |
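A minimal transcription sketch, assuming a mono Turkish speech recording and the standard Wav2Vec2 CTC processor files in the repository:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish")

speech, rate = torchaudio.load("sample.wav")  # any Turkish speech clip
speech = torchaudio.functional.resample(speech, rate, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```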
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 7.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) 'train', 'validation' and 'other' splits were used for training.
- [Media Speech](https://www.openslr.org/108/)
- [Magic Hub](https://magichub.com/)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-06
- train_batch_size: 6
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1224 | 3.45 | 500 | 0.1641 | 0.1396 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
chompk/wav2vec2-large-xlsr-thai-tokenized | 34e0e546655dc18f64ba77bb6fe9734099179432 | 2021-07-06T00:36:51.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning",
"license:apache-2.0"
] | automatic-speech-recognition | false | chompk | null | chompk/wav2vec2-large-xlsr-thai-tokenized | 17 | 1 | transformers | 8,966 | ---
language: th
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning
license: apache-2.0
---
# Wav2Vec2-Large-XLSR-53 in Thai Language (Trained with deepcut tokenizer)
|
clue/xlnet_chinese_large | f3b3caf82b5f00cfcd9a8c0aeb410045a1ffb3d4 | 2020-12-11T21:36:08.000Z | [
"pytorch",
"xlnet",
"zh",
"transformers"
] | null | false | clue | null | clue/xlnet_chinese_large | 17 | null | transformers | 8,967 | ---
language: zh
---
## xlnet_chinese_large
### Overview
**Language model:** xlnet-large
**Model size:** 1.3G
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
```
import torch
from transformers import XLNetTokenizer,XLNetModel
tokenizer = XLNetTokenizer.from_pretrained("clue/xlnet_chinese_large")
xlnet = XLNetModel.from_pretrained("clue/xlnet_chinese_large")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
cointegrated/rut5-base-review | 1d402e7b4c0e9f35f5339066031110d64f789c13 | 2021-10-17T17:54:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cointegrated | null | cointegrated/rut5-base-review | 17 | null | transformers | 8,968 | Entry not found |
coppercitylabs/uzbert-base-uncased | 6eeb1c69ceff3201f343cff4a1d9b8148d06fbac | 2021-09-22T08:17:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"uz",
"dataset:webcrawl",
"arxiv:2108.09814",
"transformers",
"uzbert",
"uzbek",
"cyrillic",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | coppercitylabs | null | coppercitylabs/uzbert-base-uncased | 17 | null | transformers | 8,969 | ---
language: uz
tags:
- uzbert
- uzbek
- bert
- cyrillic
license: mit
datasets:
- webcrawl
---
# UzBERT base model (uncased)
Pretrained model on the Uzbek language (Cyrillic script) using masked
language modeling and next sentence prediction objectives.
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='coppercitylabs/uzbert-base-uncased')
>>> unmasker("Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг [MASK], мутафаккири ва давлат арбоби бўлган.")
[
{
'token_str': 'шоири',
'token': 13587,
'score': 0.7974384427070618,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг шоири, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'олими',
'token': 18500,
'score': 0.09166576713323593,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг олими, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'асосчиси',
'token': 7469,
'score': 0.02451123297214508,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг асосчиси, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'ёзувчиси',
'token': 22439,
'score': 0.017601722851395607,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг ёзувчиси, мутафаккир ##и ва давлат арбоби бўлган.'
},
{
'token_str': 'устози',
'token': 11494,
'score': 0.010115668177604675,
'sequence': 'алишер навоий – улуғ ўзбек ва бошқа туркий халқларнинг устози, мутафаккир ##и ва давлат арбоби бўлган.'
}
]
```
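The same checkpoint can also be queried without the pipeline. The snippet below is a minimal sketch (not part of the original card) that reuses the example sentence above and prints the top-5 candidates for the masked position:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("coppercitylabs/uzbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("coppercitylabs/uzbert-base-uncased")

text = "Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг [MASK], мутафаккири ва давлат арбоби бўлган."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and print the top-5 candidate tokens
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1][0]
top_ids = logits[0, mask_index].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```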
## Training data
UzBERT model was pretrained on \~625K news articles (\~142M words).
## BibTeX entry and citation info
```bibtex
@misc{mansurov2021uzbert,
title={{UzBERT: pretraining a BERT model for Uzbek}},
author={B. Mansurov and A. Mansurov},
year={2021},
eprint={2108.09814},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dannersm/wav2vec2-large-xlsr-53-chilean-lessons | fc5388b20b8afb4be53bf80d1de2dca741bb3262 | 2022-06-27T19:29:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | dannersm | null | dannersm/wav2vec2-large-xlsr-53-chilean-lessons | 17 | null | transformers | 8,970 | Entry not found |
davanstrien/vit-manuscripts | 3fa6a7df4cca9cc4ddac498fdf3f9927b3adc7eb | 2022-02-02T22:40:58.000Z | [
"pytorch",
"tensorboard",
"vit_mae",
"pretraining",
"transformers",
"masked-auto-encoding",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | davanstrien | null | davanstrien/vit-manuscripts | 17 | null | transformers | 8,971 | ---
license: apache-2.0
tags:
- masked-auto-encoding
- generated_from_trainer
model-index:
- name: vit-manuscripts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-manuscripts
This model is a fine-tuned version of [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) on the davanstrien/manuscript_iiif_test dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5177
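For illustration, a minimal usage sketch (not part of the original card) is given below. It assumes the checkpoint keeps the MAE pre-training head and that a manuscript page image is available locally; the file name `manuscript_page.jpg` is a placeholder.
```python
from PIL import Image
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining

# "manuscript_page.jpg" is a hypothetical local file, used only as a placeholder
image = Image.open("manuscript_page.jpg").convert("RGB")

feature_extractor = AutoFeatureExtractor.from_pretrained("davanstrien/vit-manuscripts")
model = ViTMAEForPreTraining.from_pretrained("davanstrien/vit-manuscripts")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.loss)  # reconstruction loss over the masked patches
```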
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5303 | 1.0 | 34 | 0.5134 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
dbsamu/deberta-base-finetuned-ner | 78d2a24e72ddf68e980726259541f0609409b7f0 | 2022-01-21T18:25:55.000Z | [
"pytorch",
"tensorboard",
"deberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | dbsamu | null | dbsamu/deberta-base-finetuned-ner | 17 | 1 | transformers | 8,972 | Entry not found |
digit82/kogpt2-summarization | d93f0fd19efd0368bc6ca379b5b0a96845d8f439 | 2021-09-22T14:45:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | digit82 | null | digit82/kogpt2-summarization | 17 | null | transformers | 8,973 | Entry not found |
edugp/wav2vec2-xls-r-300m-36-tokens-with-lm-es | 717efd18b6d11e578de166c376ab1b5a7a9f5593 | 2022-03-23T18:28:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | edugp | null | edugp/wav2vec2-xls-r-300m-36-tokens-with-lm-es | 17 | null | transformers | 8,974 | ---
license: apache-2.0
language:
- es
tags:
- es
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-36-tokens-with-lm-es
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: 0.08677014042867702
- name: Test CER
type: cer
value: 0.02810974186831335
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 31.68
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 34.45
---
# Wav2Vec2-xls-r-300m-36-tokens-with-lm-es
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Wer: 0.0868
- Cer: 0.0281
This model consists of a Wav2Vec2 model with an additional KenLM 5-gram language model for CTC decoding.
The model was trained after removing all characters other than the lower-case unaccented letters `a-z`, the Spanish accented vowels `á`, `é`, `í`, `ó`, `ú`, and the vowel with dieresis `ü`.
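As a minimal sketch (not part of the original card), transcription can be run with the ASR pipeline; this assumes `pyctcdecode` and `kenlm` are installed so the bundled 5-gram language model is used during decoding, that `ffmpeg` is available for reading audio files, and that a local 16 kHz recording exists (the file name is a placeholder):
```python
from transformers import pipeline

# "sample_es.wav" is a hypothetical local audio file, used only as a placeholder
asr = pipeline("automatic-speech-recognition", model="edugp/wav2vec2-xls-r-300m-36-tokens-with-lm-es")
print(asr("sample_es.wav")["text"])
```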
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 3.6512 | 0.07 | 400 | 0.5734 | 0.4325 |
| 0.4404 | 0.14 | 800 | 0.3329 | 0.3021 |
| 0.3465 | 0.22 | 1200 | 0.3067 | 0.2871 |
| 0.3214 | 0.29 | 1600 | 0.2808 | 0.2694 |
| 0.319 | 0.36 | 2000 | 0.2755 | 0.2677 |
| 0.3015 | 0.43 | 2400 | 0.2667 | 0.2437 |
| 0.3102 | 0.51 | 2800 | 0.2679 | 0.2475 |
| 0.2955 | 0.58 | 3200 | 0.2591 | 0.2421 |
| 0.292 | 0.65 | 3600 | 0.2547 | 0.2404 |
| 0.2961 | 0.72 | 4000 | 0.2824 | 0.2716 |
| 0.2906 | 0.8 | 4400 | 0.2531 | 0.2321 |
| 0.2886 | 0.87 | 4800 | 0.2668 | 0.2573 |
| 0.2934 | 0.94 | 5200 | 0.2608 | 0.2454 |
| 0.2844 | 1.01 | 5600 | 0.2414 | 0.2233 |
| 0.2649 | 1.09 | 6000 | 0.2412 | 0.2198 |
| 0.2587 | 1.16 | 6400 | 0.2432 | 0.2211 |
| 0.2631 | 1.23 | 6800 | 0.2414 | 0.2225 |
| 0.2584 | 1.3 | 7200 | 0.2489 | 0.2290 |
| 0.2588 | 1.37 | 7600 | 0.2341 | 0.2156 |
| 0.2581 | 1.45 | 8000 | 0.2323 | 0.2155 |
| 0.2603 | 1.52 | 8400 | 0.2423 | 0.2231 |
| 0.2527 | 1.59 | 8800 | 0.2381 | 0.2192 |
| 0.2588 | 1.66 | 9200 | 0.2323 | 0.2176 |
| 0.2543 | 1.74 | 9600 | 0.2391 | 0.2151 |
| 0.2528 | 1.81 | 10000 | 0.2295 | 0.2091 |
| 0.2535 | 1.88 | 10400 | 0.2317 | 0.2099 |
| 0.2501 | 1.95 | 10800 | 0.2225 | 0.2105 |
| 0.2441 | 2.03 | 11200 | 0.2356 | 0.2180 |
| 0.2275 | 2.1 | 11600 | 0.2341 | 0.2115 |
| 0.2281 | 2.17 | 12000 | 0.2269 | 0.2117 |
| 0.227 | 2.24 | 12400 | 0.2367 | 0.2125 |
| 0.2471 | 2.32 | 12800 | 0.2307 | 0.2090 |
| 0.229 | 2.39 | 13200 | 0.2231 | 0.2005 |
| 0.2325 | 2.46 | 13600 | 0.2243 | 0.2100 |
| 0.2314 | 2.53 | 14000 | 0.2252 | 0.2098 |
| 0.2309 | 2.6 | 14400 | 0.2269 | 0.2089 |
| 0.2267 | 2.68 | 14800 | 0.2155 | 0.1976 |
| 0.225 | 2.75 | 15200 | 0.2263 | 0.2067 |
| 0.2309 | 2.82 | 15600 | 0.2196 | 0.2041 |
| 0.225 | 2.89 | 16000 | 0.2212 | 0.2052 |
| 0.228 | 2.97 | 16400 | 0.2192 | 0.2028 |
| 0.2136 | 3.04 | 16800 | 0.2169 | 0.2042 |
| 0.2038 | 3.11 | 17200 | 0.2173 | 0.1998 |
| 0.2035 | 3.18 | 17600 | 0.2185 | 0.2002 |
| 0.207 | 3.26 | 18000 | 0.2358 | 0.2120 |
| 0.2102 | 3.33 | 18400 | 0.2213 | 0.2019 |
| 0.211 | 3.4 | 18800 | 0.2176 | 0.1980 |
| 0.2099 | 3.47 | 19200 | 0.2186 | 0.1960 |
| 0.2093 | 3.55 | 19600 | 0.2208 | 0.2016 |
| 0.2046 | 3.62 | 20000 | 0.2138 | 0.1960 |
| 0.2095 | 3.69 | 20400 | 0.2222 | 0.2023 |
| 0.2106 | 3.76 | 20800 | 0.2159 | 0.1964 |
| 0.2066 | 3.83 | 21200 | 0.2083 | 0.1931 |
| 0.2119 | 3.91 | 21600 | 0.2130 | 0.1957 |
| 0.2167 | 3.98 | 22000 | 0.2210 | 0.1987 |
| 0.1973 | 4.05 | 22400 | 0.2112 | 0.1930 |
| 0.1917 | 4.12 | 22800 | 0.2107 | 0.1891 |
| 0.1903 | 4.2 | 23200 | 0.2132 | 0.1911 |
| 0.1903 | 4.27 | 23600 | 0.2077 | 0.1883 |
| 0.1914 | 4.34 | 24000 | 0.2054 | 0.1901 |
| 0.1943 | 4.41 | 24400 | 0.2059 | 0.1885 |
| 0.1943 | 4.49 | 24800 | 0.2095 | 0.1899 |
| 0.1936 | 4.56 | 25200 | 0.2078 | 0.1879 |
| 0.1963 | 4.63 | 25600 | 0.2018 | 0.1884 |
| 0.1934 | 4.7 | 26000 | 0.2034 | 0.1872 |
| 0.2011 | 4.78 | 26400 | 0.2051 | 0.1896 |
| 0.1901 | 4.85 | 26800 | 0.2059 | 0.1858 |
| 0.1934 | 4.92 | 27200 | 0.2028 | 0.1832 |
| 0.191 | 4.99 | 27600 | 0.2046 | 0.1870 |
| 0.1775 | 5.07 | 28000 | 0.2081 | 0.1891 |
| 0.175 | 5.14 | 28400 | 0.2084 | 0.1904 |
| 0.19 | 5.21 | 28800 | 0.2086 | 0.1920 |
| 0.1798 | 5.28 | 29200 | 0.2079 | 0.1935 |
| 0.1765 | 5.35 | 29600 | 0.2145 | 0.1930 |
| 0.181 | 5.43 | 30000 | 0.2062 | 0.1918 |
| 0.1808 | 5.5 | 30400 | 0.2083 | 0.1875 |
| 0.1769 | 5.57 | 30800 | 0.2117 | 0.1895 |
| 0.1788 | 5.64 | 31200 | 0.2055 | 0.1857 |
| 0.181 | 5.72 | 31600 | 0.2057 | 0.1870 |
| 0.1781 | 5.79 | 32000 | 0.2053 | 0.1872 |
| 0.1852 | 5.86 | 32400 | 0.2077 | 0.1904 |
| 0.1832 | 5.93 | 32800 | 0.1979 | 0.1821 |
| 0.1758 | 6.01 | 33200 | 0.1957 | 0.1754 |
| 0.1611 | 6.08 | 33600 | 0.2028 | 0.1773 |
| 0.1606 | 6.15 | 34000 | 0.2018 | 0.1780 |
| 0.1702 | 6.22 | 34400 | 0.1977 | 0.1759 |
| 0.1649 | 6.3 | 34800 | 0.2073 | 0.1845 |
| 0.1641 | 6.37 | 35200 | 0.1947 | 0.1774 |
| 0.1703 | 6.44 | 35600 | 0.2009 | 0.1811 |
| 0.1716 | 6.51 | 36000 | 0.2091 | 0.1817 |
| 0.1732 | 6.58 | 36400 | 0.1942 | 0.1743 |
| 0.1642 | 6.66 | 36800 | 0.1930 | 0.1749 |
| 0.1685 | 6.73 | 37200 | 0.1962 | 0.1716 |
| 0.1647 | 6.8 | 37600 | 0.1977 | 0.1822 |
| 0.1647 | 6.87 | 38000 | 0.1917 | 0.1748 |
| 0.1667 | 6.95 | 38400 | 0.1948 | 0.1774 |
| 0.1647 | 7.02 | 38800 | 0.2018 | 0.1783 |
| 0.15 | 7.09 | 39200 | 0.2010 | 0.1796 |
| 0.1663 | 7.16 | 39600 | 0.1969 | 0.1731 |
| 0.1536 | 7.24 | 40000 | 0.1935 | 0.1726 |
| 0.1544 | 7.31 | 40400 | 0.2030 | 0.1799 |
| 0.1536 | 7.38 | 40800 | 0.1973 | 0.1772 |
| 0.1559 | 7.45 | 41200 | 0.1973 | 0.1763 |
| 0.1547 | 7.53 | 41600 | 0.2052 | 0.1782 |
| 0.1584 | 7.6 | 42000 | 0.1965 | 0.1737 |
| 0.1542 | 7.67 | 42400 | 0.1878 | 0.1725 |
| 0.1525 | 7.74 | 42800 | 0.1946 | 0.1750 |
| 0.1547 | 7.81 | 43200 | 0.1934 | 0.1691 |
| 0.1534 | 7.89 | 43600 | 0.1919 | 0.1711 |
| 0.1574 | 7.96 | 44000 | 0.1935 | 0.1745 |
| 0.1471 | 8.03 | 44400 | 0.1915 | 0.1689 |
| 0.1433 | 8.1 | 44800 | 0.1956 | 0.1719 |
| 0.1433 | 8.18 | 45200 | 0.1980 | 0.1720 |
| 0.1424 | 8.25 | 45600 | 0.1906 | 0.1681 |
| 0.1428 | 8.32 | 46000 | 0.1892 | 0.1649 |
| 0.1424 | 8.39 | 46400 | 0.1916 | 0.1698 |
| 0.1466 | 8.47 | 46800 | 0.1970 | 0.1739 |
| 0.1496 | 8.54 | 47200 | 0.1902 | 0.1662 |
| 0.1408 | 8.61 | 47600 | 0.1858 | 0.1649 |
| 0.1445 | 8.68 | 48000 | 0.1893 | 0.1648 |
| 0.1459 | 8.76 | 48400 | 0.1875 | 0.1686 |
| 0.1433 | 8.83 | 48800 | 0.1920 | 0.1673 |
| 0.1448 | 8.9 | 49200 | 0.1833 | 0.1631 |
| 0.1461 | 8.97 | 49600 | 0.1904 | 0.1693 |
| 0.1451 | 9.04 | 50000 | 0.1969 | 0.1661 |
| 0.1336 | 9.12 | 50400 | 0.1950 | 0.1674 |
| 0.1362 | 9.19 | 50800 | 0.1971 | 0.1685 |
| 0.1316 | 9.26 | 51200 | 0.1928 | 0.1648 |
| 0.132 | 9.33 | 51600 | 0.1908 | 0.1615 |
| 0.1301 | 9.41 | 52000 | 0.1842 | 0.1569 |
| 0.1322 | 9.48 | 52400 | 0.1892 | 0.1616 |
| 0.1391 | 9.55 | 52800 | 0.1956 | 0.1656 |
| 0.132 | 9.62 | 53200 | 0.1876 | 0.1598 |
| 0.1349 | 9.7 | 53600 | 0.1870 | 0.1624 |
| 0.1325 | 9.77 | 54000 | 0.1834 | 0.1586 |
| 0.1389 | 9.84 | 54400 | 0.1892 | 0.1647 |
| 0.1364 | 9.91 | 54800 | 0.1840 | 0.1597 |
| 0.1339 | 9.99 | 55200 | 0.1858 | 0.1626 |
| 0.1269 | 10.06 | 55600 | 0.1875 | 0.1619 |
| 0.1229 | 10.13 | 56000 | 0.1909 | 0.1619 |
| 0.1258 | 10.2 | 56400 | 0.1933 | 0.1631 |
| 0.1256 | 10.27 | 56800 | 0.1930 | 0.1640 |
| 0.1207 | 10.35 | 57200 | 0.1823 | 0.1585 |
| 0.1248 | 10.42 | 57600 | 0.1889 | 0.1596 |
| 0.1264 | 10.49 | 58000 | 0.1845 | 0.1584 |
| 0.1251 | 10.56 | 58400 | 0.1869 | 0.1588 |
| 0.1251 | 10.64 | 58800 | 0.1885 | 0.1613 |
| 0.1276 | 10.71 | 59200 | 0.1855 | 0.1575 |
| 0.1303 | 10.78 | 59600 | 0.1836 | 0.1597 |
| 0.1246 | 10.85 | 60000 | 0.1810 | 0.1573 |
| 0.1283 | 10.93 | 60400 | 0.1830 | 0.1581 |
| 0.1273 | 11.0 | 60800 | 0.1837 | 0.1619 |
| 0.1202 | 11.07 | 61200 | 0.1865 | 0.1588 |
| 0.119 | 11.14 | 61600 | 0.1889 | 0.1580 |
| 0.1179 | 11.22 | 62000 | 0.1884 | 0.1592 |
| 0.1187 | 11.29 | 62400 | 0.1824 | 0.1565 |
| 0.1198 | 11.36 | 62800 | 0.1848 | 0.1552 |
| 0.1154 | 11.43 | 63200 | 0.1866 | 0.1565 |
| 0.1211 | 11.51 | 63600 | 0.1862 | 0.1563 |
| 0.1177 | 11.58 | 64000 | 0.1816 | 0.1527 |
| 0.1156 | 11.65 | 64400 | 0.1834 | 0.1540 |
| 0.1144 | 11.72 | 64800 | 0.1837 | 0.1524 |
| 0.119 | 11.79 | 65200 | 0.1859 | 0.1538 |
| 0.1183 | 11.87 | 65600 | 0.1869 | 0.1558 |
| 0.122 | 11.94 | 66000 | 0.1853 | 0.1535 |
| 0.1197 | 12.01 | 66400 | 0.1871 | 0.1586 |
| 0.1096 | 12.08 | 66800 | 0.1838 | 0.1540 |
| 0.1074 | 12.16 | 67200 | 0.1915 | 0.1592 |
| 0.1084 | 12.23 | 67600 | 0.1845 | 0.1545 |
| 0.1097 | 12.3 | 68000 | 0.1904 | 0.1552 |
| 0.112 | 12.37 | 68400 | 0.1846 | 0.1578 |
| 0.1109 | 12.45 | 68800 | 0.1862 | 0.1549 |
| 0.1114 | 12.52 | 69200 | 0.1889 | 0.1552 |
| 0.1119 | 12.59 | 69600 | 0.1828 | 0.1530 |
| 0.1124 | 12.66 | 70000 | 0.1822 | 0.1540 |
| 0.1127 | 12.74 | 70400 | 0.1865 | 0.1589 |
| 0.1128 | 12.81 | 70800 | 0.1786 | 0.1498 |
| 0.1069 | 12.88 | 71200 | 0.1813 | 0.1522 |
| 0.1069 | 12.95 | 71600 | 0.1895 | 0.1558 |
| 0.1083 | 13.02 | 72000 | 0.1925 | 0.1557 |
| 0.1009 | 13.1 | 72400 | 0.1883 | 0.1522 |
| 0.1007 | 13.17 | 72800 | 0.1829 | 0.1480 |
| 0.1014 | 13.24 | 73200 | 0.1861 | 0.1510 |
| 0.0974 | 13.31 | 73600 | 0.1836 | 0.1486 |
| 0.1006 | 13.39 | 74000 | 0.1821 | 0.1462 |
| 0.0973 | 13.46 | 74400 | 0.1857 | 0.1484 |
| 0.1011 | 13.53 | 74800 | 0.1822 | 0.1471 |
| 0.1031 | 13.6 | 75200 | 0.1823 | 0.1489 |
| 0.1034 | 13.68 | 75600 | 0.1809 | 0.1452 |
| 0.0998 | 13.75 | 76000 | 0.1817 | 0.1490 |
| 0.1071 | 13.82 | 76400 | 0.1808 | 0.1501 |
| 0.1083 | 13.89 | 76800 | 0.1796 | 0.1475 |
| 0.1053 | 13.97 | 77200 | 0.1785 | 0.1470 |
| 0.0978 | 14.04 | 77600 | 0.1886 | 0.1495 |
| 0.094 | 14.11 | 78000 | 0.1854 | 0.1489 |
| 0.0915 | 14.18 | 78400 | 0.1854 | 0.1498 |
| 0.0947 | 14.25 | 78800 | 0.1888 | 0.1500 |
| 0.0939 | 14.33 | 79200 | 0.1885 | 0.1494 |
| 0.0973 | 14.4 | 79600 | 0.1877 | 0.1466 |
| 0.0946 | 14.47 | 80000 | 0.1904 | 0.1494 |
| 0.0931 | 14.54 | 80400 | 0.1815 | 0.1473 |
| 0.0958 | 14.62 | 80800 | 0.1905 | 0.1508 |
| 0.0982 | 14.69 | 81200 | 0.1881 | 0.1511 |
| 0.0963 | 14.76 | 81600 | 0.1823 | 0.1449 |
| 0.0943 | 14.83 | 82000 | 0.1782 | 0.1458 |
| 0.0981 | 14.91 | 82400 | 0.1795 | 0.1465 |
| 0.0995 | 14.98 | 82800 | 0.1811 | 0.1484 |
| 0.0909 | 15.05 | 83200 | 0.1822 | 0.1450 |
| 0.0872 | 15.12 | 83600 | 0.1890 | 0.1466 |
| 0.0878 | 15.2 | 84000 | 0.1859 | 0.1468 |
| 0.0884 | 15.27 | 84400 | 0.1825 | 0.1429 |
| 0.0871 | 15.34 | 84800 | 0.1816 | 0.1438 |
| 0.0883 | 15.41 | 85200 | 0.1817 | 0.1433 |
| 0.0844 | 15.48 | 85600 | 0.1821 | 0.1412 |
| 0.0843 | 15.56 | 86000 | 0.1863 | 0.1411 |
| 0.0805 | 15.63 | 86400 | 0.1863 | 0.1441 |
| 0.085 | 15.7 | 86800 | 0.1808 | 0.1440 |
| 0.0848 | 15.77 | 87200 | 0.1808 | 0.1421 |
| 0.0844 | 15.85 | 87600 | 0.1841 | 0.1406 |
| 0.082 | 15.92 | 88000 | 0.1850 | 0.1442 |
| 0.0854 | 15.99 | 88400 | 0.1773 | 0.1426 |
| 0.0835 | 16.06 | 88800 | 0.1888 | 0.1436 |
| 0.0789 | 16.14 | 89200 | 0.1922 | 0.1434 |
| 0.081 | 16.21 | 89600 | 0.1864 | 0.1448 |
| 0.0799 | 16.28 | 90000 | 0.1902 | 0.1428 |
| 0.0848 | 16.35 | 90400 | 0.1873 | 0.1422 |
| 0.084 | 16.43 | 90800 | 0.1835 | 0.1421 |
| 0.083 | 16.5 | 91200 | 0.1878 | 0.1390 |
| 0.0794 | 16.57 | 91600 | 0.1877 | 0.1398 |
| 0.0807 | 16.64 | 92000 | 0.1800 | 0.1385 |
| 0.0829 | 16.71 | 92400 | 0.1910 | 0.1434 |
| 0.0839 | 16.79 | 92800 | 0.1843 | 0.1381 |
| 0.0815 | 16.86 | 93200 | 0.1812 | 0.1365 |
| 0.0831 | 16.93 | 93600 | 0.1889 | 0.1383 |
| 0.0803 | 17.0 | 94000 | 0.1902 | 0.1403 |
| 0.0724 | 17.08 | 94400 | 0.1934 | 0.1380 |
| 0.0734 | 17.15 | 94800 | 0.1865 | 0.1394 |
| 0.0739 | 17.22 | 95200 | 0.1876 | 0.1395 |
| 0.0758 | 17.29 | 95600 | 0.1938 | 0.1411 |
| 0.0733 | 17.37 | 96000 | 0.1933 | 0.1410 |
| 0.077 | 17.44 | 96400 | 0.1848 | 0.1385 |
| 0.0754 | 17.51 | 96800 | 0.1876 | 0.1407 |
| 0.0746 | 17.58 | 97200 | 0.1863 | 0.1371 |
| 0.0732 | 17.66 | 97600 | 0.1927 | 0.1401 |
| 0.0746 | 17.73 | 98000 | 0.1874 | 0.1390 |
| 0.0755 | 17.8 | 98400 | 0.1853 | 0.1381 |
| 0.0724 | 17.87 | 98800 | 0.1849 | 0.1365 |
| 0.0716 | 17.94 | 99200 | 0.1848 | 0.1380 |
| 0.074 | 18.02 | 99600 | 0.1891 | 0.1362 |
| 0.0687 | 18.09 | 100000 | 0.1974 | 0.1357 |
| 0.0651 | 18.16 | 100400 | 0.1942 | 0.1353 |
| 0.0672 | 18.23 | 100800 | 0.1823 | 0.1363 |
| 0.0671 | 18.31 | 101200 | 0.1959 | 0.1357 |
| 0.0684 | 18.38 | 101600 | 0.1959 | 0.1374 |
| 0.0688 | 18.45 | 102000 | 0.1904 | 0.1353 |
| 0.0696 | 18.52 | 102400 | 0.1926 | 0.1364 |
| 0.0661 | 18.6 | 102800 | 0.1905 | 0.1351 |
| 0.0684 | 18.67 | 103200 | 0.1955 | 0.1343 |
| 0.0712 | 18.74 | 103600 | 0.1873 | 0.1353 |
| 0.0701 | 18.81 | 104000 | 0.1822 | 0.1354 |
| 0.0688 | 18.89 | 104400 | 0.1905 | 0.1373 |
| 0.0695 | 18.96 | 104800 | 0.1879 | 0.1335 |
| 0.0661 | 19.03 | 105200 | 0.2005 | 0.1351 |
| 0.0644 | 19.1 | 105600 | 0.1972 | 0.1351 |
| 0.0627 | 19.18 | 106000 | 0.1956 | 0.1340 |
| 0.0633 | 19.25 | 106400 | 0.1962 | 0.1340 |
| 0.0629 | 19.32 | 106800 | 0.1937 | 0.1342 |
| 0.0636 | 19.39 | 107200 | 0.1905 | 0.1355 |
| 0.0631 | 19.46 | 107600 | 0.1917 | 0.1326 |
| 0.0624 | 19.54 | 108000 | 0.1977 | 0.1355 |
| 0.0621 | 19.61 | 108400 | 0.1941 | 0.1345 |
| 0.0635 | 19.68 | 108800 | 0.1949 | 0.1336 |
| 0.063 | 19.75 | 109200 | 0.1919 | 0.1317 |
| 0.0636 | 19.83 | 109600 | 0.1928 | 0.1317 |
| 0.0612 | 19.9 | 110000 | 0.1923 | 0.1314 |
| 0.0636 | 19.97 | 110400 | 0.1923 | 0.1343 |
| 0.0581 | 20.04 | 110800 | 0.2036 | 0.1332 |
| 0.0573 | 20.12 | 111200 | 0.2007 | 0.1315 |
| 0.0566 | 20.19 | 111600 | 0.1974 | 0.1319 |
| 0.0589 | 20.26 | 112000 | 0.1958 | 0.1322 |
| 0.0577 | 20.33 | 112400 | 0.1946 | 0.1307 |
| 0.0587 | 20.41 | 112800 | 0.1957 | 0.1295 |
| 0.0588 | 20.48 | 113200 | 0.2013 | 0.1306 |
| 0.0594 | 20.55 | 113600 | 0.2010 | 0.1312 |
| 0.0602 | 20.62 | 114000 | 0.1993 | 0.1314 |
| 0.0583 | 20.69 | 114400 | 0.1931 | 0.1297 |
| 0.059 | 20.77 | 114800 | 0.1974 | 0.1305 |
| 0.0566 | 20.84 | 115200 | 0.1979 | 0.1294 |
| 0.0588 | 20.91 | 115600 | 0.1944 | 0.1292 |
| 0.0569 | 20.98 | 116000 | 0.1974 | 0.1309 |
| 0.0554 | 21.06 | 116400 | 0.2080 | 0.1307 |
| 0.0542 | 21.13 | 116800 | 0.2056 | 0.1301 |
| 0.0532 | 21.2 | 117200 | 0.2027 | 0.1309 |
| 0.0535 | 21.27 | 117600 | 0.1970 | 0.1287 |
| 0.0533 | 21.35 | 118000 | 0.2124 | 0.1310 |
| 0.0546 | 21.42 | 118400 | 0.2043 | 0.1300 |
| 0.0544 | 21.49 | 118800 | 0.2056 | 0.1281 |
| 0.0562 | 21.56 | 119200 | 0.1986 | 0.1273 |
| 0.0549 | 21.64 | 119600 | 0.2075 | 0.1283 |
| 0.0522 | 21.71 | 120000 | 0.2058 | 0.1278 |
| 0.052 | 21.78 | 120400 | 0.2057 | 0.1280 |
| 0.0563 | 21.85 | 120800 | 0.1966 | 0.1295 |
| 0.0546 | 21.92 | 121200 | 0.2002 | 0.1285 |
| 0.0539 | 22.0 | 121600 | 0.1996 | 0.1279 |
| 0.0504 | 22.07 | 122000 | 0.2077 | 0.1273 |
| 0.0602 | 22.14 | 122400 | 0.2055 | 0.1278 |
| 0.0503 | 22.21 | 122800 | 0.2037 | 0.1283 |
| 0.0496 | 22.29 | 123200 | 0.2109 | 0.1279 |
| 0.0523 | 22.36 | 123600 | 0.2068 | 0.1276 |
| 0.0508 | 22.43 | 124000 | 0.2051 | 0.1257 |
| 0.0505 | 22.5 | 124400 | 0.2056 | 0.1269 |
| 0.05 | 22.58 | 124800 | 0.1995 | 0.1268 |
| 0.0496 | 22.65 | 125200 | 0.2022 | 0.1290 |
| 0.0484 | 22.72 | 125600 | 0.2095 | 0.1291 |
| 0.0518 | 22.79 | 126000 | 0.2132 | 0.1271 |
| 0.0499 | 22.87 | 126400 | 0.2124 | 0.1263 |
| 0.0485 | 22.94 | 126800 | 0.2092 | 0.1252 |
| 0.0476 | 23.01 | 127200 | 0.2138 | 0.1256 |
| 0.0467 | 23.08 | 127600 | 0.2119 | 0.1256 |
| 0.048 | 23.15 | 128000 | 0.2138 | 0.1269 |
| 0.0461 | 23.23 | 128400 | 0.2036 | 0.1244 |
| 0.0467 | 23.3 | 128800 | 0.2163 | 0.1255 |
| 0.0475 | 23.37 | 129200 | 0.2180 | 0.1258 |
| 0.0468 | 23.44 | 129600 | 0.2129 | 0.1245 |
| 0.0456 | 23.52 | 130000 | 0.2122 | 0.1250 |
| 0.0458 | 23.59 | 130400 | 0.2157 | 0.1257 |
| 0.0453 | 23.66 | 130800 | 0.2088 | 0.1242 |
| 0.045 | 23.73 | 131200 | 0.2144 | 0.1247 |
| 0.0469 | 23.81 | 131600 | 0.2113 | 0.1246 |
| 0.0453 | 23.88 | 132000 | 0.2151 | 0.1234 |
| 0.0471 | 23.95 | 132400 | 0.2130 | 0.1229 |
| 0.0443 | 24.02 | 132800 | 0.2150 | 0.1225 |
| 0.0446 | 24.1 | 133200 | 0.2166 | 0.1235 |
| 0.0435 | 24.17 | 133600 | 0.2143 | 0.1222 |
| 0.0407 | 24.24 | 134000 | 0.2175 | 0.1218 |
| 0.0421 | 24.31 | 134400 | 0.2147 | 0.1227 |
| 0.0435 | 24.38 | 134800 | 0.2193 | 0.1233 |
| 0.0414 | 24.46 | 135200 | 0.2172 | 0.1225 |
| 0.0419 | 24.53 | 135600 | 0.2156 | 0.1225 |
| 0.0419 | 24.6 | 136000 | 0.2143 | 0.1235 |
| 0.0423 | 24.67 | 136400 | 0.2179 | 0.1226 |
| 0.0423 | 24.75 | 136800 | 0.2144 | 0.1221 |
| 0.0424 | 24.82 | 137200 | 0.2135 | 0.1210 |
| 0.0419 | 24.89 | 137600 | 0.2166 | 0.1218 |
| 0.0408 | 24.96 | 138000 | 0.2151 | 0.1211 |
| 0.0433 | 25.04 | 138400 | 0.2174 | 0.1214 |
| 0.0395 | 25.11 | 138800 | 0.2242 | 0.1210 |
| 0.0403 | 25.18 | 139200 | 0.2219 | 0.1215 |
| 0.0413 | 25.25 | 139600 | 0.2225 | 0.1207 |
| 0.0389 | 25.33 | 140000 | 0.2187 | 0.1202 |
| 0.0395 | 25.4 | 140400 | 0.2244 | 0.1204 |
| 0.0398 | 25.47 | 140800 | 0.2263 | 0.1199 |
| 0.0386 | 25.54 | 141200 | 0.2165 | 0.1187 |
| 0.0396 | 25.61 | 141600 | 0.2171 | 0.1187 |
| 0.0406 | 25.69 | 142000 | 0.2199 | 0.1190 |
| 0.0404 | 25.76 | 142400 | 0.2224 | 0.1190 |
| 0.0391 | 25.83 | 142800 | 0.2230 | 0.1185 |
| 0.04 | 25.9 | 143200 | 0.2208 | 0.1200 |
| 0.0396 | 25.98 | 143600 | 0.2179 | 0.1191 |
| 0.0353 | 26.05 | 144000 | 0.2285 | 0.1178 |
| 0.0368 | 26.12 | 144400 | 0.2273 | 0.1186 |
| 0.0393 | 26.19 | 144800 | 0.2247 | 0.1196 |
| 0.0368 | 26.27 | 145200 | 0.2314 | 0.1181 |
| 0.0373 | 26.34 | 145600 | 0.2215 | 0.1188 |
| 0.038 | 26.41 | 146000 | 0.2262 | 0.1180 |
| 0.0363 | 26.48 | 146400 | 0.2250 | 0.1172 |
| 0.0365 | 26.56 | 146800 | 0.2299 | 0.1174 |
| 0.0382 | 26.63 | 147200 | 0.2292 | 0.1165 |
| 0.0365 | 26.7 | 147600 | 0.2282 | 0.1165 |
| 0.0371 | 26.77 | 148000 | 0.2276 | 0.1172 |
| 0.0365 | 26.85 | 148400 | 0.2280 | 0.1173 |
| 0.0376 | 26.92 | 148800 | 0.2248 | 0.1164 |
| 0.0365 | 26.99 | 149200 | 0.2230 | 0.1158 |
| 0.0343 | 27.06 | 149600 | 0.2300 | 0.1157 |
| 0.0354 | 27.13 | 150000 | 0.2298 | 0.1166 |
| 0.0333 | 27.21 | 150400 | 0.2307 | 0.1158 |
| 0.0353 | 27.28 | 150800 | 0.2300 | 0.1157 |
| 0.036 | 27.35 | 151200 | 0.2335 | 0.1160 |
| 0.0343 | 27.42 | 151600 | 0.2324 | 0.1155 |
| 0.0361 | 27.5 | 152000 | 0.2300 | 0.1150 |
| 0.0352 | 27.57 | 152400 | 0.2279 | 0.1146 |
| 0.0353 | 27.64 | 152800 | 0.2307 | 0.1149 |
| 0.0342 | 27.71 | 153200 | 0.2315 | 0.1152 |
| 0.0345 | 27.79 | 153600 | 0.2290 | 0.1146 |
| 0.034 | 27.86 | 154000 | 0.2319 | 0.1141 |
| 0.0347 | 27.93 | 154400 | 0.2312 | 0.1144 |
| 0.0338 | 28.0 | 154800 | 0.2328 | 0.1146 |
| 0.0347 | 28.08 | 155200 | 0.2352 | 0.1151 |
| 0.033 | 28.15 | 155600 | 0.2337 | 0.1142 |
| 0.0336 | 28.22 | 156000 | 0.2345 | 0.1141 |
| 0.0337 | 28.29 | 156400 | 0.2315 | 0.1143 |
| 0.0314 | 28.36 | 156800 | 0.2353 | 0.1140 |
| 0.0333 | 28.44 | 157200 | 0.2338 | 0.1146 |
| 0.0317 | 28.51 | 157600 | 0.2345 | 0.1139 |
| 0.0326 | 28.58 | 158000 | 0.2336 | 0.1143 |
| 0.033 | 28.65 | 158400 | 0.2352 | 0.1137 |
| 0.0325 | 28.73 | 158800 | 0.2312 | 0.1130 |
| 0.0321 | 28.8 | 159200 | 0.2338 | 0.1133 |
| 0.0334 | 28.87 | 159600 | 0.2335 | 0.1130 |
| 0.0317 | 28.94 | 160000 | 0.2340 | 0.1126 |
| 0.0321 | 29.02 | 160400 | 0.2349 | 0.1126 |
| 0.032 | 29.09 | 160800 | 0.2369 | 0.1127 |
| 0.0312 | 29.16 | 161200 | 0.2363 | 0.1124 |
| 0.0303 | 29.23 | 161600 | 0.2363 | 0.1123 |
| 0.0322 | 29.31 | 162000 | 0.2354 | 0.1124 |
| 0.03 | 29.38 | 162400 | 0.2360 | 0.1122 |
| 0.0299 | 29.45 | 162800 | 0.2378 | 0.1124 |
| 0.0313 | 29.52 | 163200 | 0.2377 | 0.1120 |
| 0.0299 | 29.59 | 163600 | 0.2367 | 0.1124 |
| 0.0313 | 29.67 | 164000 | 0.2380 | 0.1120 |
| 0.031 | 29.74 | 164400 | 0.2369 | 0.1120 |
| 0.0327 | 29.81 | 164800 | 0.2358 | 0.1117 |
| 0.0316 | 29.88 | 165200 | 0.2358 | 0.1118 |
| 0.0307 | 29.96 | 165600 | 0.2362 | 0.1118 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
emre/distilbert-tr-q-a | 1fa82acdc35b00ae008477b02f181cda4e83b29a | 2022-02-17T13:40:11.000Z | [
"pytorch",
"bert",
"question-answering",
"tr",
"dataset:TQuAD",
"transformers",
"loodos-bert-base",
"TQuAD",
"autotrain_compatible"
] | question-answering | false | emre | null | emre/distilbert-tr-q-a | 17 | null | transformers | 8,975 | ---
language: tr
tags:
- question-answering
- loodos-bert-base
- TQuAD
- tr
datasets:
- TQuAD
---
# Turkish SQuAD Model : Question Answering
Fine-tuned Loodos Turkish BERT base model for the question-answering task on the TQuAD dataset
* Loodos-BERT-base: https://huggingface.co/loodos/bert-base-turkish-uncased
* TQuAD dataset: https://github.com/TQuad/turkish-nlp-qa-dataset
# Training Code
```bash
!python3 Turkish-QA.py \
--model_type bert \
--model_name_or_path loodos/bert-base-turkish-uncased \
--do_train \
--do_eval \
--train_file trainQ.json \
--predict_file dev1.json \
--per_gpu_train_batch_size 8 \
--learning_rate 5e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--output_dir "./model"
```
# Example Usage
> Load Model
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("emre/distilbert-tr-q-a")
model = AutoModelForQuestionAnswering.from_pretrained("emre/distilbert-tr-q-a")
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
```
> Apply the model
```python
def ask(question, context):
    temp = nlp(question=question, context=context)
    start_idx = temp["start"]
    end_idx = temp["end"]
    return context[start_idx:end_idx]
izmir="İzmir, Türkiye'de Ege Bölgesi'nde yer alan şehir ve ülkenin 81 ilinden biridir. Ülkenin nüfus bakımından en kalabalık üçüncü şehridir. Ekonomik, tarihi ve sosyo-kültürel açıdan önde gelen şehirlerden biridir. Nüfusu 2021 itibarıyla 4.425.789 kişidir. Yüzölçümü olarak ülkenin yirmi üçüncü büyük ilidir."
soru1 = "İzmir'in nüfusu kaçtır?"
print(ask(soru1,izmir))
soru2 = "İzmir hangi bölgede bulunur?"
print(ask(soru2,izmir))
``` |
gagan3012/keytotext-gpt | 90757bac25de2d1205a98b04fb8697d6fbd5ab0d | 2021-05-21T16:04:39.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | gagan3012 | null | gagan3012/keytotext-gpt | 17 | null | transformers | 8,976 | Entry not found |
gchhablani/fnet-base-finetuned-wnli | ad9d8ca9950f2ae4480d2fcb5d9dbefb2bfb4bae | 2021-09-20T09:07:59.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"arxiv:2105.03824",
"transformers",
"generated_from_trainer",
"fnet-bert-base-comparison",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/fnet-base-finetuned-wnli | 17 | null | transformers | 8,977 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5492957746478874
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-wnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6887
- Accuracy: 0.5493
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
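A minimal inference sketch (not part of the original card) is shown below; WNLI is a sentence-pair classification task, so both sentences are passed to the tokenizer, and the example pair is only an illustration.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gchhablani/fnet-base-finetuned-wnli")
model = AutoModelForSequenceClassification.from_pretrained("gchhablani/fnet-base-finetuned-wnli")

inputs = tokenizer(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "The trophy is too big.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```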
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir fnet-base-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7052 | 1.0 | 40 | 0.6902 | 0.5634 |
| 0.6957 | 2.0 | 80 | 0.7013 | 0.4366 |
| 0.6898 | 3.0 | 120 | 0.6898 | 0.5352 |
| 0.6958 | 4.0 | 160 | 0.6874 | 0.5634 |
| 0.6982 | 5.0 | 200 | 0.6887 | 0.5493 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ghadeermobasher/BC4CHEMD-Modified_scibert_scivocab_cased | 9cd7ec2af56ffa05d042ff3c855979b8007bf0e6 | 2022-01-24T03:08:45.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Modified_scibert_scivocab_cased | 17 | null | transformers | 8,978 | Entry not found |
ghadeermobasher/BC4CHEMD_Imbalancedscibert_scivocab_cased | d65763bca6132fa1599e0420e0aba46a4b0de34e | 2022-01-24T03:12:41.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD_Imbalancedscibert_scivocab_cased | 17 | null | transformers | 8,979 | Entry not found |
giganticode/bert-base-code_comments | fc469b2f87912e102a1facfde637d445025d2521 | 2021-10-25T12:59:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | giganticode | null | giganticode/bert-base-code_comments | 17 | null | transformers | 8,980 | Entry not found |
gsarti/covidbert-nli | 477f49d112b00be886f7f5ce6fdf9f8cd73c9bd9 | 2021-05-19T17:48:24.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | gsarti | null | gsarti/covidbert-nli | 17 | null | transformers | 8,981 | # CovidBERT-NLI
This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses.
The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**.
Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba)
**Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
**Training time**: ~6 hours on the NVIDIA Tesla P100 GPU provided in Kaggle Notebooks.
**Parameters**:
| Parameter | Value |
|------------------|-------|
| Batch size | 64 |
| Training steps | 23000 |
| Warmup steps | 1450 |
| Lowercasing | True |
| Max. Seq. Length | 128 |
**Performances**: The performance was evaluated on the test portion of the [STS dataset](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) using Spearman rank correlation and compared to the scores of similar models obtained with the same procedure, to verify its quality.
| Model | Score |
|-------------------------------|-------------|
| `covidbert-nli` (this) | 67.52 |
| `gsarti/biobert-nli` | 73.40 |
| `gsarti/scibert-nli` | 74.50 |
| `bert-base-nli-mean-tokens`[2]| 77.12 |
An example usage for similarity-based scientific paper retrieval is provided in the [Covid-19 Semantic Browser](https://github.com/gsarti/covid-papers-browser) repository.
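As a rough illustration, sentences can also be embedded directly with the `sentence-transformers` library by wrapping the checkpoint with the same mean-pooling strategy described above (a minimal sketch; the query sentence is only an example):
```python
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer("gsarti/covidbert-nli", max_seq_length=128)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

embeddings = model.encode(["What is known about COVID-19 transmission routes?"])
print(embeddings.shape)  # (1, hidden_size) for a single input sentence
```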
**References:**
[1] A. Conneau et al., [Supervised Learning of Universal Sentence Representations from Natural Language Inference Data](https://www.aclweb.org/anthology/D17-1070/)
[2] N. Reimers et I. Gurevych, [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://www.aclweb.org/anthology/D19-1410/)
|
hf-internal-testing/tiny-random-megatron-bert | b71486855a524afe43c166e689b4cb524e3ad04a | 2021-07-24T15:19:56.000Z | [
"pytorch",
"megatron-bert",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-megatron-bert | 17 | null | transformers | 8,982 | Entry not found |
huaen/question_detection | 575041aae53f7da3e32f2bf1c7717029441e435a | 2021-10-24T12:18:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | huaen | null | huaen/question_detection | 17 | 1 | transformers | 8,983 | Entry not found |
huggingartists/nirvana | b1a39ba57e8f8e4056a020229fd37eba2778910f | 2022-02-21T01:51:05.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/nirvana",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/nirvana | 17 | null | transformers | 8,984 | ---
language: en
datasets:
- huggingartists/nirvana
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/4c1373962cfc3a668a3e30da9a76a34c.640x640x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nirvana</div>
<a href="https://genius.com/artists/nirvana">
<div style="text-align: center; font-size: 14px;">@nirvana</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Nirvana.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/nirvana).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/nirvana")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1bj9eav1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Nirvana's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3vzztlsq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3vzztlsq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/nirvana')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/nirvana")
model = AutoModelWithLMHead.from_pretrained("huggingartists/nirvana")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/radiohead | e0ee1924e14445e3a9d87c44efba057433172151 | 2022-03-09T09:46:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/radiohead",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/radiohead | 17 | null | transformers | 8,985 | ---
language: en
datasets:
- huggingartists/radiohead
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/593c69b2e4bb8eb47801ce1952c5d30b.600x600x184.gif')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Radiohead</div>
<a href="https://genius.com/artists/radiohead">
<div style="text-align: center; font-size: 14px;">@radiohead</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Radiohead.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/radiohead).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/radiohead")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/35vxvq9n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Radiohead's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2bulf32i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2bulf32i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/radiohead')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/radiohead")
model = AutoModelWithLMHead.from_pretrained("huggingartists/radiohead")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/_buddha_quotes | 95978ab82ecc05d3b16f4d6be23d46ec3c6b3215 | 2021-05-21T16:55:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/_buddha_quotes | 17 | 1 | transformers | 8,986 | ---
language: en
thumbnail: https://www.huggingtweets.com/_buddha_quotes/1609541828144/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2409590248/73g1ywcwdlyd8ls4wa4g_400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">The Buddha 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@_buddha_quotes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@_buddha_quotes's tweets](https://twitter.com/_buddha_quotes).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3200</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>0</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3200</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3m2s8fe6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_buddha_quotes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/j1ixyq8z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/j1ixyq8z/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/_buddha_quotes'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/afinchwrites | eb35f213848b7ccf0f420ada35646c19def5f0f3 | 2021-05-21T17:47:53.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/afinchwrites | 17 | null | transformers | 8,987 | ---
language: en
thumbnail: https://www.huggingtweets.com/afinchwrites/1617758836679/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1250126825109544960/8ndvxL2E_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ashley Finch 🔞 🤖 AI Bot </div>
<div style="font-size: 15px">@afinchwrites bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@afinchwrites's tweets](https://twitter.com/afinchwrites).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3214 |
| Retweets | 1236 |
| Short tweets | 265 |
| Tweets kept | 1713 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bwfztuv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afinchwrites's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39vriclf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39vriclf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/afinchwrites')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/alotoforanges | b09958ea4150c990a50747974e00e815872efb1a | 2021-05-21T18:26:16.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/alotoforanges | 17 | null | transformers | 8,988 | ---
language: en
thumbnail: https://www.huggingtweets.com/alotoforanges/1616898775163/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320844146664460288/W09Z-oPC_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">April 🤖 AI Bot </div>
<div style="font-size: 15px">@alotoforanges bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@alotoforanges's tweets](https://twitter.com/alotoforanges).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 186 |
| Short tweets | 552 |
| Tweets kept | 2502 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rgdnomb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alotoforanges's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1e1tznc6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1e1tznc6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alotoforanges')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/broschistocks | 01a3209c6719777ae0d9e111b5b29a29a66bc493 | 2021-05-21T21:12:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/broschistocks | 17 | null | transformers | 8,989 | ---
language: en
thumbnail: https://www.huggingtweets.com/broschistocks/1614095969958/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1159519240757624838/LEJGJWNz_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">dessicant gourmand 🤖 AI Bot </div>
<div style="font-size: 15px">@broschistocks bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@broschistocks's tweets](https://twitter.com/broschistocks).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 664 |
| Retweets | 331 |
| Short tweets | 66 |
| Tweets kept | 267 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8qbbqieq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @broschistocks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pnoc5bl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pnoc5bl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/broschistocks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cushbomb | c83b0aa44a4026d03943775677bed9ba44f23a69 | 2021-05-21T23:53:52.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cushbomb | 17 | null | transformers | 8,990 | ---
language: en
thumbnail: https://www.huggingtweets.com/cushbomb/1614099144410/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1352838562622791682/X3YGO4bN_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">matt christman 🤖 AI Bot </div>
<div style="font-size: 15px">@cushbomb bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@cushbomb's tweets](https://twitter.com/cushbomb).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3222 |
| Retweets | 161 |
| Short tweets | 701 |
| Tweets kept | 2360 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c6zjdd90/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cushbomb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w2qoeb19) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w2qoeb19/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cushbomb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cyberbully66 | f9531b85ac532d7443cc81e981eea8b715dc7db8 | 2021-05-21T23:59:40.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cyberbully66 | 17 | null | transformers | 8,991 | ---
language: en
thumbnail: https://www.huggingtweets.com/cyberbully66/1616851006786/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375463332732403714/TP6hwUxm_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">evil succubus 🤖 AI Bot </div>
<div style="font-size: 15px">@cyberbully66 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@cyberbully66's tweets](https://twitter.com/cyberbully66).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3195 |
| Retweets | 397 |
| Short tweets | 570 |
| Tweets kept | 2228 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2c5t9ev6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cyberbully66's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/e4ld23gl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/e4ld23gl/artifacts) is logged and versioned.
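For readers curious what this fine-tuning step could look like in code, here is a minimal sketch of training GPT-2 on a text file of tweets with W&B logging enabled. The file name, hyperparameters, and sequence length are illustrative assumptions, not the actual huggingtweets training script.

```python
# Minimal sketch of GPT-2 fine-tuning on a tweet corpus with W&B logging.
# "tweets.txt" and the hyperparameters below are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One tweet per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "tweets.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=4,
                           per_device_train_batch_size=8, report_to="wandb"),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # metrics stream to the W&B run; checkpoints are saved under "out"
```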
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cyberbully66')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/demirenjun | 7ec6ba4103cddcb166d6b0c77e3fcfc2b5d9201c | 2021-05-22T01:19:12.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/demirenjun | 17 | null | transformers | 8,992 | ---
language: en
thumbnail: https://www.huggingtweets.com/demirenjun/1617917661023/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1354964611586547715/WIIHy349_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">rj bday (season) 🦜🍓💝 🤖 AI Bot </div>
<div style="font-size: 15px">@demirenjun bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@demirenjun's tweets](https://twitter.com/demirenjun).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3199 |
| Retweets | 800 |
| Short tweets | 384 |
| Tweets kept | 2015 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bdlmgyb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @demirenjun's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ck8cxvw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ck8cxvw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/demirenjun')
generator("My dream is", num_return_sequences=5)
```
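If you prefer to work with the tokenizer and model objects directly rather than the pipeline, the same checkpoint can be loaded with the standard causal-LM classes; the sampling settings below are just one example, not a recommended configuration.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/demirenjun")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/demirenjun")

inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.95,
                         num_return_sequences=3, pad_token_id=tokenizer.eos_token_id)
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```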
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/elhotzo | 47618a248dd776dfde4eac1196f4c0f6798a0e4f | 2021-05-22T02:51:10.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/elhotzo | 17 | null | transformers | 8,993 | ---
language: en
thumbnail: https://www.huggingtweets.com/elhotzo/1613422967587/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/954348751799373825/_rztgdVC_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">E L H O T Z O 🤖 AI Bot </div>
<div style="font-size: 15px">@elhotzo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@elhotzo's tweets](https://twitter.com/elhotzo).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3222 |
| Retweets | 43 |
| Short tweets | 286 |
| Tweets kept | 2893 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/v5cgxsz8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elhotzo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1x8wqa37) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1x8wqa37/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elhotzo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/hoffridder | 5d2b84b9f6ee14621d4080bfed04d01bb41fe3e4 | 2021-05-22T07:00:39.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/hoffridder | 17 | null | transformers | 8,994 | ---
language: en
thumbnail: https://www.huggingtweets.com/hoffridder/1617780877643/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1378566172946395136/MdKVnvRJ_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ridderhoff 🤖 AI Bot </div>
<div style="font-size: 15px">@hoffridder bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@hoffridder's tweets](https://twitter.com/hoffridder).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 16 |
| Short tweets | 443 |
| Tweets kept | 2791 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1piyzy7v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hoffridder's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/365i3db0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/365i3db0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hoffridder')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/karpathy | c69b6b6058bdceac35082062d64d4681bb89c67c | 2021-05-22T10:32:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/karpathy | 17 | 1 | transformers | 8,995 | ---
language: en
thumbnail: https://www.huggingtweets.com/karpathy/1607705820861/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1296667294148382721/9Pr6XrPB_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Andrej Karpathy 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@karpathy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@karpathy's tweets](https://twitter.com/karpathy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 416 |
| Short tweets | 89 |
| Tweets kept | 2712 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2m4p0ith/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @karpathy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7mm2jhgw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7mm2jhgw/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/karpathy')
generator("My dream is", num_return_sequences=5)
```
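Sampling is stochastic, so each call returns different completions; if you want reproducible outputs you can fix the random seed first. This is a small optional addition, not part of the original card:

```python
from transformers import pipeline, set_seed

set_seed(42)  # fix the sampling seed so repeated calls give the same completions
generator = pipeline('text-generation', model='huggingtweets/karpathy')
print(generator("My dream is", num_return_sequences=3))
```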
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/markprzepiora | f11d2881a1819bbf0dbe2fcdbae0ee74610d0b1f | 2021-05-22T13:27:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/markprzepiora | 17 | null | transformers | 8,996 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1287851691874717696/za-omADx_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">M⦁͘͜⦁̸̀͘⦁͘ P⦁̸̀͘⦁͏⦁͘͜⦁͟͞⦁⦁͘⦁͢͜͜⦁́ 🤖 AI Bot </div>
<div style="font-size: 15px">@markprzepiora bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@markprzepiora's tweets](https://twitter.com/markprzepiora).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1093 |
| Retweets | 55 |
| Short tweets | 100 |
| Tweets kept | 938 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2e9iu7ts/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @markprzepiora's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9mk8jcf5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9mk8jcf5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/markprzepiora')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/premiles_ | 43a693557ec8b844fe221603fdc6429b599acb2e | 2021-05-22T19:24:17.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/premiles_ | 17 | null | transformers | 8,997 | ---
language: en
thumbnail: https://www.huggingtweets.com/premiles_/1616685758725/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1328826791331586048/GG3K46Cu_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Wonka Bourdain - FKA Irish 🤖 AI Bot </div>
<div style="font-size: 15px">@premiles_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@premiles_'s tweets](https://twitter.com/premiles_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3219 |
| Retweets | 538 |
| Short tweets | 505 |
| Tweets kept | 2176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2bdtvlgr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @premiles_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3n0ejc55) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3n0ejc55/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/premiles_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/slimepriestess | 041f0a38101219268a3b5b55a2d737bf55d5d90d | 2021-05-22T23:04:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/slimepriestess | 17 | null | transformers | 8,998 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1319135470656180224/cxISAFko_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Octavia 🤖 AI Bot </div>
<div style="font-size: 15px">@slimepriestess bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@slimepriestess's tweets](https://twitter.com/slimepriestess).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 201 |
| Retweets | 23 |
| Short tweets | 16 |
| Tweets kept | 162 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1f2gufmd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @slimepriestess's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3h5af3aw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3h5af3aw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/slimepriestess')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/thcphilosopher | d3f38f7fcf785dc691db08748a242b7bb698b53e | 2021-05-23T01:18:12.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/thcphilosopher | 17 | null | transformers | 8,999 | ---
language: en
thumbnail: https://www.huggingtweets.com/thcphilosopher/1616728158308/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320456433176031232/S-_vUTA9_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">The High Philosopher 🤖 AI Bot </div>
<div style="font-size: 15px">@thcphilosopher bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@thcphilosopher's tweets](https://twitter.com/thcphilosopher).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 371 |
| Short tweets | 582 |
| Tweets kept | 2264 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cugs1hg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thcphilosopher's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/z32eiyry) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/z32eiyry/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thcphilosopher')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|