modelId (string) | sha (string) | lastModified (string) | tags (list) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
tbosse/bert-base-german-cased-finetuned-subj_v4_5Epoch | 85f2c775712cbba3e669e27e7acfb8c708f4310c | 2022-04-07T18:32:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v4_5Epoch | 9 | null | transformers | 12,500 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v4_5Epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v4_5Epoch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3180
- Precision: 0.6640
- Recall: 0.5985
- F1: 0.6296
- Accuracy: 0.8761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
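The hyperparameters above map roughly onto a `TrainingArguments` object as sketched below; the `output_dir` and the evaluation strategy are assumptions rather than values taken from the original training script, and the dataset/`Trainer` wiring is omitted.
```python
from transformers import TrainingArguments

# Hedged sketch only: reconstructs the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="bert-base-german-cased-finetuned-subj_v4_5Epoch",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed from the per-epoch results table below
)
```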
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 32 | 0.4471 | 0.4951 | 0.1861 | 0.2706 | 0.8145 |
| No log | 2.0 | 64 | 0.3823 | 0.6096 | 0.4161 | 0.4946 | 0.8472 |
| No log | 3.0 | 96 | 0.3428 | 0.6744 | 0.5292 | 0.5930 | 0.8698 |
| No log | 4.0 | 128 | 0.3216 | 0.6571 | 0.5876 | 0.6204 | 0.8730 |
| No log | 5.0 | 160 | 0.3180 | 0.6640 | 0.5985 | 0.6296 | 0.8761 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Damith/AraELECTRA-discriminator-QuranQA | 8ebe071858657a6bd69924908ce0ede3c888fcec | 2022-04-08T11:22:10.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Damith | null | Damith/AraELECTRA-discriminator-QuranQA | 9 | null | transformers | 12,501 | Entry not found |
Sleoruiz/roberta-base-bne-finetuned-cola | d057e78a45a4b52ddc3d5fb6fad5685c160c1504 | 2022-04-12T10:53:47.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Sleoruiz | null | Sleoruiz/roberta-base-bne-finetuned-cola | 9 | null | transformers | 12,502 | Entry not found |
shniranjan/wav2vec2-large-xlsr-1b-nepali | 7ffa019eefb4ffd5dab72e13262c9f665f64a205 | 2022-04-13T01:48:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ne",
"transformers",
"speech-to-text"
]
| automatic-speech-recognition | false | shniranjan | null | shniranjan/wav2vec2-large-xlsr-1b-nepali | 9 | null | transformers | 12,503 | ---
language:
- ne
tags:
- speech-to-text
---
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor


def parse_transcription(wav_file):
    # load the pretrained processor and model
    processor = Wav2Vec2Processor.from_pretrained("shniranjan/wav2vec2-large-xlsr-1b-nepali")
    model = Wav2Vec2ForCTC.from_pretrained("shniranjan/wav2vec2-large-xlsr-1b-nepali")
    # load audio (the model expects 16 kHz mono input)
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return a PyTorch tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # inference: retrieve logits and take the argmax
    with torch.no_grad():
        logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
    return transcription
``` |
raquiba/finetuning-sentiment-model-3000-samples | 937cb7d4ec33e3d8ff11a77c8fef4328f6342418 | 2022-04-11T05:05:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | raquiba | null | raquiba/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,504 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8690095846645367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3242
- Accuracy: 0.8633
- F1: 0.8690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gk07/wikineural-multilingual-ner | d8c2a681a08bad3594193e0588f2ef7da1f11e9c | 2022-04-11T14:20:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | gk07 | null | gk07/wikineural-multilingual-ner | 9 | null | transformers | 12,505 | Entry not found |
silpa/wikineural-multilingual-ner | 1d04244d73e9bce0ae6e76a777c614529cf05d9b | 2022-04-11T18:18:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | silpa | null | silpa/wikineural-multilingual-ner | 9 | null | transformers | 12,506 | Entry not found |
nielsr/segformer-test-v7 | 75a953953155242049b47af5983a1aa3b77f092c | 2022-04-11T18:04:13.000Z | [
"pytorch",
"segformer",
"dataset:segments/sidewalk-semantic",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
]
| image-segmentation | false | nielsr | null | nielsr/segformer-test-v7 | 9 | null | transformers | 12,507 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
--- |
microsoft/reacc-py-retriever | de6f2dd28daea74432154cb14d78859e4bda6198 | 2022-04-12T08:03:10.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2203.07722",
"transformers",
"license:mit"
]
| feature-extraction | false | microsoft | null | microsoft/reacc-py-retriever | 9 | 2 | transformers | 12,508 | ---
license: mit
---
# ReACC-py-retriever
This is the retrieval model for [ReACC: A Retrieval-Augmented Code Completion Framework](https://arxiv.org/abs/2203.07722).
In the paper, the model is used to retrieve similar code given an incomplete code snippet as the query. The model can also be used for incomplete code-to-code search and code clone detection.
`py-retriever` is a BERT-like encoder consisting of 12 transformer layers. It is continually pre-trained from [GraphCodeBERT](https://huggingface.co/microsoft/graphcodebert-base) with contrastive learning on the Python programming language. More details can be found in our paper.
Note that the format of the input code differs from the original source code: we normalize the source code to better capture information from line breaks and indentation in Python. An example input is:
```python
sum = 0<endofline>for val in numbers:<endofline><INDENT>sum = sum+val
```
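As a rough, hedged sketch of how one might query the retriever for an embedding of such a normalized snippet (the choice of the `[CLS]` vector as the pooled representation is an assumption; the reference retrieval code lives in the ReACC repository linked below):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/reacc-py-retriever")
model = AutoModel.from_pretrained("microsoft/reacc-py-retriever")

code = "sum = 0<endofline>for val in numbers:<endofline><INDENT>sum = sum+val"
inputs = tokenizer(code, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Assumed pooling: take the first ([CLS]) token embedding as the code vector.
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)  # (1, hidden_size)
```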
To get more information about how to convert source codes into this format, please refer to [ReACC GitHub repo](https://github.com/microsoft/ReACC). |
BigSalmon/GPT2Neo1.3BPoints3 | 69c3e44c0af16b5ea36cb5aa6cb9940f26574c52 | 2022-04-12T19:21:58.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/GPT2Neo1.3BPoints3 | 9 | null | transformers | 12,509 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPT2Neo1.3BPoints3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPT2Neo1.3BPoints3")
```
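Using the `tokenizer` and `model` loaded above, a minimal generation sketch might look like the following; the prompt follows one of the formats shown below, and the decoding settings are illustrative assumptions rather than the author's recommended values.
```python
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-style models define no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```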
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
lucaordronneau/twitter-roberta-base-sentiment-latest-finetuned-FG-SINGLE_SENTENCE-NEWS | 8a58a103a7098c8bce92b8e55081afc91c1e6d6c | 2022-05-03T11:29:22.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | lucaordronneau | null | lucaordronneau/twitter-roberta-base-sentiment-latest-finetuned-FG-SINGLE_SENTENCE-NEWS | 9 | null | transformers | 12,510 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: twitter-roberta-base-sentiment-latest-finetuned-FG-SINGLE_SENTENCE-NEWS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-latest-finetuned-FG-SINGLE_SENTENCE-NEWS
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2822
- Accuracy: 0.6305
- F1: 0.6250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 321 | 0.9646 | 0.5624 | 0.4048 |
| 0.9537 | 2.0 | 642 | 0.9474 | 0.5644 | 0.4176 |
| 0.9537 | 3.0 | 963 | 0.9008 | 0.5903 | 0.5240 |
| 0.858 | 4.0 | 1284 | 0.9939 | 0.5999 | 0.5846 |
| 0.5908 | 5.0 | 1605 | 1.0947 | 0.6108 | 0.6026 |
| 0.5908 | 6.0 | 1926 | 1.2507 | 0.5740 | 0.5823 |
| 0.3676 | 7.0 | 2247 | 1.4717 | 0.6128 | 0.6017 |
| 0.2246 | 8.0 | 2568 | 1.6726 | 0.5965 | 0.6003 |
| 0.2246 | 9.0 | 2889 | 1.8041 | 0.6380 | 0.6298 |
| 0.1468 | 10.0 | 3210 | 1.9796 | 0.6053 | 0.6026 |
| 0.1161 | 11.0 | 3531 | 2.0988 | 0.6237 | 0.6202 |
| 0.1161 | 12.0 | 3852 | 2.4171 | 0.5944 | 0.5989 |
| 0.0916 | 13.0 | 4173 | 2.3326 | 0.6374 | 0.6288 |
| 0.0916 | 14.0 | 4494 | 2.5472 | 0.6360 | 0.6245 |
| 0.0661 | 15.0 | 4815 | 2.9127 | 0.6176 | 0.6187 |
| 0.0454 | 16.0 | 5136 | 2.9133 | 0.6326 | 0.6276 |
| 0.0454 | 17.0 | 5457 | 3.1299 | 0.6210 | 0.6162 |
| 0.0337 | 18.0 | 5778 | 3.1828 | 0.6224 | 0.6188 |
| 0.0223 | 19.0 | 6099 | 3.2655 | 0.6299 | 0.6223 |
| 0.0223 | 20.0 | 6420 | 3.2822 | 0.6305 | 0.6250 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
maretamasaeva/thesis-freeform-yesno | e19347b582c491a37563a72f6d24b41f8bfaf7e4 | 2022-04-22T12:14:20.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | maretamasaeva | null | maretamasaeva/thesis-freeform-yesno | 9 | null | transformers | 12,511 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: thesis-freeform-yesno
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-freeform-yesno
This model is a fine-tuned version of [maretamasaeva/thesis-freeform](https://huggingface.co/maretamasaeva/thesis-freeform) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4547
- Accuracy: 0.0194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.5001 | 1.0 | 9052 | 2.4600 | 0.0194 |
| 2.4921 | 2.0 | 18104 | 2.4595 | 0.0194 |
| 2.4879 | 3.0 | 27156 | 2.4576 | 0.0194 |
| 2.4793 | 4.0 | 36208 | 2.4547 | 0.0194 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Helsinki-NLP/opus-mt-tc-big-ces_slk-en | 45416949bc1778108950f304dff42159b5630737 | 2022-06-01T12:58:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"en",
"sk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-ces_slk-en | 9 | null | transformers | 12,512 | ---
language:
- cs
- en
- sk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-ces_slk-en
results:
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: flores101-devtest
type: flores_101
args: ces eng devtest
metrics:
- name: BLEU
type: bleu
value: 41.2
- task:
name: Translation slk-eng
type: translation
args: slk-eng
dataset:
name: flores101-devtest
type: flores_101
args: slk eng devtest
metrics:
- name: BLEU
type: bleu
value: 40.1
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 38.6
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 37.9
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: news-test2008
type: news-test2008
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 26.2
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 57.7
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2009
type: wmt-2009-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 28.8
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2010
type: wmt-2010-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 30.3
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2011
type: wmt-2011-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 30.3
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2012
type: wmt-2012-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 29.4
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2013
type: wmt-2013-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 33.1
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2014
type: wmt-2014-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 38.3
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2015
type: wmt-2015-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 33.6
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2016
type: wmt-2016-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 36.8
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2017
type: wmt-2017-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 32.3
- task:
name: Translation ces-eng
type: translation
args: ces-eng
dataset:
name: newstest2018
type: wmt-2018-news
args: ces-eng
metrics:
- name: BLEU
type: bleu
value: 33.0
---
# opus-mt-tc-big-ces_slk-en
Neural machine translation model for translating from Czech and Slovak (ces+slk) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): ces
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces+slk-eng/opusTCv20210807+bt_transformer-big_2022-03-17.zip)
* more information released models: [OPUS-MT ces+slk-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces+slk-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Podívej se na své kalhoty! Zapni si je na zip.",
"Mrzí mě, že Tom odchází."
]
model_name = "pytorch-models/opus-mt-tc-big-ces_slk-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Look at your pants, zip them up.
# I'm sorry Tom's leaving.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-ces_slk-en")
print(pipe("Podívej se na své kalhoty! Zapni si je na zip."))
# expected output: Look at your pants, zip them up.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces+slk-eng/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces+slk-eng/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ces-eng | tatoeba-test-v2021-08-07 | 0.72120 | 57.7 | 13824 | 105010 |
| ces-eng | flores101-devtest | 0.66511 | 41.2 | 1012 | 24721 |
| slk-eng | flores101-devtest | 0.66084 | 40.1 | 1012 | 24721 |
| ces-eng | multi30k_test_2016_flickr | 0.62216 | 38.6 | 1000 | 12955 |
| ces-eng | multi30k_test_2018_flickr | 0.61838 | 37.9 | 1071 | 14689 |
| ces-eng | newssyscomb2009 | 0.56380 | 29.9 | 502 | 11818 |
| ces-eng | news-test2008 | 0.54071 | 26.2 | 2051 | 49380 |
| ces-eng | newstest2009 | 0.55871 | 28.8 | 2525 | 65399 |
| ces-eng | newstest2010 | 0.57634 | 30.3 | 2489 | 61711 |
| ces-eng | newstest2011 | 0.57002 | 30.3 | 3003 | 74681 |
| ces-eng | newstest2012 | 0.56564 | 29.4 | 3003 | 72812 |
| ces-eng | newstest2013 | 0.58723 | 33.1 | 3000 | 64505 |
| ces-eng | newstest2014 | 0.64192 | 38.3 | 3003 | 68065 |
| ces-eng | newstest2015 | 0.58688 | 33.6 | 2656 | 53569 |
| ces-eng | newstest2016 | 0.61544 | 36.8 | 2999 | 64670 |
| ces-eng | newstest2017 | 0.58085 | 32.3 | 3005 | 61721 |
| ces-eng | newstest2018 | 0.58627 | 33.0 | 2983 | 63495 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:42:25 EEST 2022
* port machine: LM0-400-22516.local
|
caotianyu1996/bert_finetuned_ner | f5d5d65fad803915a7580e4896dbd079b13a32b4 | 2022-04-14T06:51:31.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"generated_from_keras_callback",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | caotianyu1996 | null | caotianyu1996/bert_finetuned_ner | 9 | null | transformers | 12,513 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: caotianyu1996/bert_finetuned_ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# caotianyu1996/bert_finetuned_ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0247
- Validation Loss: 0.0593
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
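A hedged sketch of how the optimizer configuration above could be recreated with the TensorFlow utilities in `transformers`; the warmup step count and the exact call site are assumptions, since the card only records the serialized optimizer config.
```python
import tensorflow as tf
from transformers import create_optimizer

# training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay with a linear (power=1.0) polynomial decay over 1017 steps,
# as in the config above; num_warmup_steps=0 is an assumption.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=1017,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```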
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1275 | 0.0574 | 0 |
| 0.0414 | 0.0569 | 1 |
| 0.0247 | 0.0593 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Raychanan/bert-base-chinese-first512 | a5f110b6b0c80e0518fc065d8eac42a0ae201bfe | 2022-04-15T02:50:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Raychanan | null | Raychanan/bert-base-chinese-first512 | 9 | null | transformers | 12,514 | first 512
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    push_to_hub=True,
)
``` |
MartinoMensio/racism-models-w-m-vote-strict-epoch-1 | 77a98c18c09c0a9e38c28bc3604f66acedfd7c1d | 2022-05-04T16:24:13.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-w-m-vote-strict-epoch-1 | 9 | null | transformers | 12,515 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `w-m-vote-strict-epoch-1`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'w-m-vote-strict-epoch-1'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.9342454075813293}, {'label': 'non-racist', 'score': 0.7690662741661072}]
```
For more details, see https://github.com/preyero/neatclass22
|
MartinoMensio/racism-models-w-m-vote-strict-epoch-4 | 01e5904ed76a49e34ef6f432a4ef46bc1e430f28 | 2022-05-04T16:26:42.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-w-m-vote-strict-epoch-4 | 9 | null | transformers | 12,516 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `w-m-vote-strict-epoch-4`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'w-m-vote-strict-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.9834708571434021}, {'label': 'non-racist', 'score': 0.995682954788208}]
```
For more details, see https://github.com/preyero/neatclass22
|
jason9693/koelectra-base-v3-discriminator-apeach | 890a73095ea8edf183f211239e30ef5b8928db18 | 2022-04-16T14:21:01.000Z | [
"pytorch",
"electra",
"text-classification",
"ko",
"dataset:jason9693/APEACH",
"transformers"
]
| text-classification | false | jason9693 | null | jason9693/koelectra-base-v3-discriminator-apeach | 9 | null | transformers | 12,517 | ---
language: ko
widget:
- text: "응 어쩔티비~~"
datasets:
- jason9693/APEACH
--- |
rmihaylov/bert-base-bg | 985c9aa785a4573f7d7915f49c0a3a6ff911a110 | 2022-04-17T05:15:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:1810.04805",
"arxiv:1905.07213",
"transformers",
"torch",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | rmihaylov | null | rmihaylov/bert-base-bg | 9 | null | transformers | 12,518 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# BERT BASE (cased)
Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it does make a difference
between bulgarian and Bulgarian.
## Model description
The model was trained similarly to [RuBERT](https://arxiv.org/pdf/1905.07213.pdf), in which multilingual BERT was adapted for the Russian language.
The training data was Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
>>> 'fill-mask',
>>> model='rmihaylov/bert-base-bg',
>>> tokenizer='rmihaylov/bert-base-bg',
>>> device=0,
>>> revision=None)
>>> output = model("София е [MASK] на България.")
>>> print(output)
[{'score': 0.12665307521820068,
'sequence': 'София е столица на България.',
'token': 2659,
'token_str': 'столица'},
{'score': 0.07470757514238358,
'sequence': 'София е Перлата на България.',
'token': 102146,
'token_str': 'Перлата'},
{'score': 0.06786204129457474,
'sequence': 'София е Столицата на България.',
'token': 45495,
'token_str': 'Столицата'},
{'score': 0.05533991754055023,
'sequence': 'София е Столица на България.',
'token': 100524,
'token_str': 'Столица'},
{'score': 0.05485989898443222,
'sequence': 'София е столицата на България.',
'token': 2294,
'token_str': 'столицата'}]
``` |
ttwj-sutd/finetuning-sentiment-model-3000-samples-5pm | b0a0445d7f4a499d0a20d456935760403bc3a9a7 | 2022-04-17T09:11:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ttwj-sutd | null | ttwj-sutd/finetuning-sentiment-model-3000-samples-5pm | 9 | null | transformers | 12,519 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples-5pm
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-5pm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4325
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 188 | 0.3858 | 0.84 |
| No log | 2.0 | 376 | 0.3146 | 0.8833 |
| 0.2573 | 3.0 | 564 | 0.3938 | 0.8833 |
| 0.2573 | 4.0 | 752 | 0.4325 | 0.88 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
varinner/jaredbotmark1point5 | 5b6bdfd9db53ea8d6955d3e658fd7fd0a7887335 | 2022-04-17T11:13:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:basement",
"transformers",
"conversational"
]
| conversational | false | varinner | null | varinner/jaredbotmark1point5 | 9 | null | transformers | 12,520 | ---
language:
- en
tags:
- conversational
datasets:
- basement
---
Don't worry about it. |
Sarmila/layoutlmv2-finetuned-funsd-test | e3d27fb0c1243482765059f17d7b094882fb3c12 | 2022-04-19T09:26:04.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"transformers"
]
| null | false | Sarmila | null | Sarmila/layoutlmv2-finetuned-funsd-test | 9 | null | transformers | 12,521 | Entry not found |
tuhailong/cross_encoder_electra-180g-large-discriminator | 94093faaf7d589b223bf67d43e0bf6ec00cc6594 | 2022-04-20T02:40:23.000Z | [
"pytorch",
"electra",
"text-classification",
"zh",
"dataset:dialogue",
"transformers",
"cross-encoder"
]
| text-classification | false | tuhailong | null | tuhailong/cross_encoder_electra-180g-large-discriminator | 9 | null | transformers | 12,522 | ---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---
# Data
The training data is sentence-similarity data from e-commerce dialogue, about 500,000 (50w) sentence pairs.
## Model
The model was created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a cross-encoder and the pretrained base model is hfl/chinese-electra-180g-large-discriminator.
### Usage
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder("tuhailong/cross_encoder_electra-180g-large-discriminator", device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
```
#### Code
The training code is available at https://github.com/TTurn/cross-encoder |
domenicrosati/t5-finetuned-parasci | 2370d10ec37aa30704c349dfaf21e4f71b765d87 | 2022-04-24T14:51:43.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | domenicrosati | null | domenicrosati/t5-finetuned-parasci | 9 | 1 | transformers | 12,523 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-finetuned-parasci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-finetuned-parasci
This model is a fine-tuned version of [domenicrosati/t5-finetuned-parasci](https://huggingface.co/domenicrosati/t5-finetuned-parasci) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0845
- Bleu: 19.5623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cammy/pegasus-cnn_dailymail-100-lit-evalMA-new | ab7cfe4095d0f853d82af3f07443cc9f1adbb4d6 | 2022-04-21T02:57:43.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | cammy | null | cammy/pegasus-cnn_dailymail-100-lit-evalMA-new | 9 | null | transformers | 12,524 | Entry not found |
Shivierra/DialoGPT-small-technoblade | c823e78b187ac8dfd8efb60bba4d72627fb4bcab | 2022-04-21T05:08:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Shivierra | null | Shivierra/DialoGPT-small-technoblade | 9 | null | transformers | 12,525 | ---
tags:
- conversational
---
# Technoblade DialoGPT Model |
SCAI-BIO/bio-gottbert-base | b9805330174419a55ac147f79e8bb312195e4ce7 | 2022-04-21T07:58:37.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | SCAI-BIO | null | SCAI-BIO/bio-gottbert-base | 9 | null | transformers | 12,526 | Entry not found |
rdchambers/distilbert-base-uncased-finetuned-emotion | 4bfec69ac9efab40bf53417fd507a4bbbc695bc9 | 2022-04-21T18:00:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | rdchambers | null | rdchambers/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,527 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9221171029763118
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2238
- Accuracy: 0.922
- F1: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.829 | 1.0 | 250 | 0.3173 | 0.9005 | 0.8980 |
| 0.247 | 2.0 | 500 | 0.2238 | 0.922 | 0.9221 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Truefilter/btwt_identity_subclasses | 2e0587b2aaba9a0778b48131dda1a384aff85e58 | 2022-04-22T13:25:52.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Truefilter | null | Truefilter/btwt_identity_subclasses | 9 | null | transformers | 12,528 | Entry not found |
naomiyjchen/distilbert-base-uncased-finetuned-emotion | 874c906484b35d886d22ea5147252f2ed3fb9ea7 | 2022-04-24T04:43:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | naomiyjchen | null | naomiyjchen/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,529 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217262923032896
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.9215
- F1: 0.9217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8381 | 1.0 | 250 | 0.3167 | 0.8995 | 0.8960 |
| 0.2493 | 2.0 | 500 | 0.2208 | 0.9215 | 0.9217 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Manauu17/enhanced_roberta_sentiments_es | b797d000f45071ba7d45890170db64c2f82e3637 | 2022-04-24T22:30:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Manauu17 | null | Manauu17/enhanced_roberta_sentiments_es | 9 | 1 | transformers | 12,530 | # roberta_sentiments_es_en , A Sentiment Analysis model for Spanish sentences
This is a roBERTa-base model trained on ~58M tweets and fine-tuned for sentiment analysis. The model currently supports Spanish sentences.
This is an enhanced version of 'Manauu17/roberta_sentiments_es', following state-of-the-art practice for BERT-style models to achieve better results: the last 4 hidden layers were concatenated and followed by dense layers to obtain the classification results.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
import pandas as pd
from scipy.special import softmax
MODEL = 'Manauu17/enhanced_roberta_sentiments_es'
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PyTorch
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = ['@usuario siempre es bueno la opinión de un playo',
'Bendito año el que me espera']
encoded_input = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
output = model(**encoded_input)
scores = output[0].detach().numpy()
labels_dict = model.config.id2label
# Results
def get_scores(model_output, labels_dict):
    # Row-wise softmax converts the logits into class probabilities
    scores = softmax(model_output, axis=1)
    frame = pd.DataFrame(scores, columns=labels_dict.values())
    return frame
# PyTorch
get_scores(scores, labels_dict).style.highlight_max(axis=1, color="green")
```
Output:
```
# PyTorch
get_scores(scores, labels_dict).style.highlight_max(axis=1, color="green")
Negative Neutral Positive
0 0.000607 0.004851 0.906596
1 0.079812 0.006650 0.001484
```
|
Danni/distilbert-base-uncased-finetuned-dbpedia | d790e018913a2d5d1c1360123c8a30a589387d84 | 2022-04-26T02:04:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Danni | null | Danni/distilbert-base-uncased-finetuned-dbpedia | 9 | null | transformers | 12,531 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-dbpedia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-dbpedia
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4338
- eval_matthews_correlation: 0.7817
- eval_runtime: 1094.9103
- eval_samples_per_second: 60.777
- eval_steps_per_second: 3.799
- epoch: 1.0
- step: 23568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Hate-speech-CNERG/hindi-codemixed-abusive-MuRIL | 9a8cab5d2fb4fc7241e9f2bec5e71a0e4872b1d4 | 2022-05-03T06:03:59.000Z | [
"pytorch",
"bert",
"text-classification",
"hi-en",
"arxiv:2204.12543",
"transformers",
"license:afl-3.0"
]
| text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/hindi-codemixed-abusive-MuRIL | 9 | null | transformers | 12,532 | ---
language: hi-en
license: afl-3.0
---
This model detects **abusive speech** in **code-mixed Hindi**. It is fine-tuned from the MuRIL model on a code-mixed Hindi abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive).
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
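A minimal usage sketch (the pipeline call and the example sentence below are illustrative assumptions, not from the original authors):

```python
from transformers import pipeline

# Binary abusive-speech classifier for code-mixed Hindi
# LABEL_0 -> Normal, LABEL_1 -> Abusive (as listed above)
classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/hindi-codemixed-abusive-MuRIL",
)

# Illustrative code-mixed Hindi input
print(classifier("yeh movie bahut acchi thi, sab log dekho"))
# -> [{'label': 'LABEL_0', 'score': ...}] (illustrative output shape)
```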
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ |
xfbai/AMRBART-large-finetuned-AMR3.0-AMR2Text | 17bc8ea418cfb45b2642305d0ea3428e161af448 | 2022-04-26T05:57:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2203.07836",
"transformers",
"AMRBART",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | xfbai | null | xfbai/AMRBART-large-finetuned-AMR3.0-AMR2Text | 9 | null | transformers | 12,533 | ---
language: en
tags:
- AMRBART
license: mit
---
## AMRBART-large-finetuned-AMR3.0-AMR2Text
This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR3.0 dataset. It achieves a sacre-bleu score of 45.0 on the evaluation set. More details are given in the paper: [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al., ACL 2022.
## Model description
Same with AMRBART.
## Training data
The model is fine-tuned on [AMR3.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 55,635
training instances, 1,722 validation instances, and 1,898 test instances.
## Intended uses & limitations
You can use the model for AMR-to-text generation, but it's mostly intended to be used in the domain of News.
## How to use
Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMR2Text")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
## BibTeX entry and citation info
Please cite this paper if you find this model helpful
```bibtex
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}
``` |
Cheatham/xlm-roberta-large-finetuned-dA-002 | ff36b4822c6333778ca9c155349a0721b49264b0 | 2022-04-25T13:03:18.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-dA-002 | 9 | null | transformers | 12,534 | Entry not found |
Rodion/sbert_uno_sustainable_development_goals | a685ad29ba12967407f6aaa5ddc8b85015882e58 | 2022-05-01T14:33:23.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | Rodion | null | Rodion/sbert_uno_sustainable_development_goals | 9 | null | sentence-transformers | 12,535 |
The SBERT model was trained on the dataset of UNO sustainable development goals. The total dataset size is 20000 records. 16000 were used for training and 4000 for evaluation.
The similarity between records was calculated based on the class similarity:
- case 1 (no common classes): 0
- case 2: (number of common classes) / (number of all classes)
- case 3: (number of common classes) / (maximal number of record classes) + (number of common classes) / (number of all classes)
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 219 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 5,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Sie-BERT/glue_sst_classifier | ccc3e9df2bc3c2395f0e4e350a8c8e83ddae6fe6 | 2022-04-26T11:38:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Sie-BERT | null | Sie-BERT/glue_sst_classifier | 9 | null | transformers | 12,536 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
manueltonneau/bert-twitter-en-job-offer | 685e09fd8e41a73739f8b5a4c467be88ec9179a7 | 2022-04-26T15:54:55.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2203.09178",
"transformers"
]
| text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-en-job-offer | 9 | null | transformers | 12,537 | ---
language: en
widget:
- text: "Software Engineer job at Amazon in Seattle, WA"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Offer (1), else (0)
- country: US
- language: English
- architecture: BERT base
## Model description
This model is a version of `DeepPavlov/bert-base-cased-conversational` finetuned to recognize English tweets containing a job offer. It was trained on English tweets from US-based users. The task is framed as a binary classification problem with:
- the positive class referring to tweets containing a job offer (label=1)
- the negative class referring to all other tweets (label=0)
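A minimal inference sketch (the pipeline call and the example tweet are illustrative, not taken from the paper):

```python
from transformers import pipeline

# Binary classifier: positive class (1) = tweet contains a job offer, negative class (0) = all other tweets
detector = pipeline(
    "text-classification",
    model="manueltonneau/bert-twitter-en-job-offer",
)

# Illustrative example tweet
print(detector("We're hiring! Software Engineer position open in Seattle, WA - apply now."))
```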
## Resources
The dataset of English tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
luccazen/finetuning-sentiment-model-3000-samples | ed7b36ea5aaf7d36a6be21d2cb5692a68b840cd6 | 2022-04-27T01:58:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | luccazen | null | luccazen/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,538 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3026
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
fgravelaine/ner-test | a116eaa7f8704b1b7cefaf6bec886579b1998ae2 | 2022-04-27T19:50:39.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | fgravelaine | null | fgravelaine/ner-test | 9 | null | transformers | 12,539 | Entry not found |
talaugust/bart-sci-definition | 54297b31ebbe7fb1a312399f2b0e38c2a966697c | 2022-04-28T19:24:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | talaugust | null | talaugust/bart-sci-definition | 9 | null | transformers | 12,540 | ## BART Scientific Definition Generation
This is a finetuned BART Large model from the paper:
"Generating Scientific Definitions with Controllable Complexity"
By Tal August, Katharina Reinecke, and Noah A. Smith
Abstract: Unfamiliar terminology and complex language can present barriers to understanding science. Natural language processing stands to help address these issues by automatically defining unfamiliar terms. We introduce a new task and dataset for defining scientific terms and controlling the complexity of generated definitions as a way of adapting to a specific reader’s background knowledge. We test four definition generation methods for this new task, finding that a sequence-to-sequence approach is most successful. We then explore the version of the task in which definitions are generated at a target complexity level. We introduce a novel reranking approach and find in human evaluations that it offers superior fluency while also controlling complexity, compared to several controllable generation baselines.
## Description
The model is finetuned on the task of generating definitions of scientific terms. We frame our task as generating an answer to the question “What is (are) X?” Along with the question, the model takes a support document of scientific abstracted related to the term being defined.
## Intended use
The intended use of this model is to generate definitions of scientific terms. It is NOT intended for public deployment due to the risk of hallucinated information in model output. Strong supervision of definition factuality is important for any future deployment of such a system. While hallucinated information can be damaging in any generation context, incorrect scientific definitions could mislead readers and potentially contribute to broader scientific misinformation. The model is trained on data we believe is trustworthy (e.g., questions and answers from NIH websites); however, this is no guarantee that model output will be.
## Training data
The model is trained on data from two sources: Wikipedia science glossaries and a portion of the [MedQuAD dataset](https://github.com/abachaa/MedQuAD), which contains healthcare consumer questions and answers from NIH websites. For more information on these data sources, see the [github repository](https://github.com/talaugust/definition-complexity) for the paper.
## How to use
Note that this model was trained and evaluated using transformers version 4.2.2
```python
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
)

bart_sci_def_tokenizer = AutoTokenizer.from_pretrained("talaugust/bart-sci-definition")
bart_sci_def_model = AutoModelForSeq2SeqLM.from_pretrained("talaugust/bart-sci-definition")

# The input pairs the question with a support document of scientific abstracts (elided here)
inputs = bart_sci_def_tokenizer(
    "question: What is (are) surfactants? context: <P> .... <P> ....",
    return_tensors="pt",
    max_length=1024,
    truncation=True,
)

outputs = bart_sci_def_model.generate(
    **inputs,
    decoder_start_token_id=bart_sci_def_tokenizer.bos_token_id,
    num_return_sequences=1,
    num_beams=5,
    max_length=64,
    min_length=8,
    early_stopping=True,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    no_repeat_ngram_size=3,
)

answers = [
    bart_sci_def_tokenizer.decode(ans_ids, skip_special_tokens=True).strip()
    for ans_ids in outputs
]
```
## Biases & Limitations
The goal of this model is to enable a wider audience of readers to understand and engage with scientific writing. A risk, though, is that such attempts might instead widen the gap to accessing scientific information. The texts in the datasets we train our models on are in General or Academic American English. Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English. This risks further alienating the readers we hope to serve. An important and exciting direction in NLP is making models more flexible to dialects and low-resource languages.
|
fractalego/samsumbot | 81e2151776c24746f3da4527b4b3797eca90478f | 2022-06-05T13:26:08.000Z | [
"pytorch",
"gptj",
"text-generation",
"arxiv:2106.09685",
"transformers"
]
| text-generation | false | fractalego | null | fractalego/samsumbot | 9 | null | transformers | 12,541 | # What is SamSum Bot?
This is a model fine-tuned on the [SamSum dataset](https://huggingface.co/datasets/samsum).
However, instead of training the system to summarize conversations, the model is trained to predict a conversation given a summary.
The prompt needs to be in the following form
```python
A partial summary of the conversation is:
{summary}
With the dialogue being:
{dialogue}
```
where *{summary}* is a text as in
```python
John went out to buy groceries. He meets Jane on the way and they talk about the weather.
```
and the *{dialogue}* needs to be structured with speaking lines preceded by the speaking character
```python
John: Oh hi Jane.
Jane: Nice to see you?
John: The weather looks nice today
Jane: [PREDICTION]
```
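As an illustration, here is a rough sketch of assembling such a prompt in Python (the helper below is our own example, not part of the released code):

```python
def build_prompt(summary: str, dialogue_lines: list, next_speaker: str) -> str:
    # Illustrative helper: assembles the prompt format described above,
    # ending with the next speaker's name so the model predicts their line
    dialogue = "\n".join(dialogue_lines + [f"{next_speaker}:"])
    return (
        "A partial summary of the conversation is:\n"
        f"{summary}\n"
        "With the dialogue being:\n"
        f"{dialogue}"
    )

prompt = build_prompt(
    "John went out to buy groceries. He meets Jane on the way and they talk about the weather.",
    ["John: Oh hi Jane.", "Jane: Nice to see you?", "John: The weather looks nice today"],
    next_speaker="Jane",
)
# `prompt` can then be sent to the model, e.g. through the back-end described below.
```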
The system is based on the GPTJ-6B by EleutherAI, [quantized by Hivemind](https://huggingface.co/hivemind/gpt-j-6B-8bit). It has been fine-tuned according to the [LoRa method](https://arxiv.org/abs/2106.09685).
A simple back-end is available in [this repo](https://github.com/fractalego/samsum-bot), where the model is served using Torchserve.
A terminal-like front-end interface is available [here](https://github.com/fractalego/samsumbot_client).
This interface is the one used in my website [http://fractalego.io](http://fractalego.io).
|
pszemraj/mGPT-Peter-mwe | 096e796b71fd5da55cc550a01e074d776141e1a3 | 2022-04-29T11:44:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"dataset:mc4",
"dataset:Wikipedia",
"transformers",
"multilingual",
"PyTorch",
"Transformers",
"gpt3",
"Deepspeed",
"Megatron",
"license:apache-2.0"
]
| text-generation | false | pszemraj | null | pszemraj/mGPT-Peter-mwe | 9 | null | transformers | 12,542 | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- multilingual
- PyTorch
- Transformers
- gpt3
- gpt2
- Deepspeed
- Megatron
datasets:
- mc4
- Wikipedia
widget:
- text: "I know you're tired, but can we go for another walk this evening?\npeter szemraj:\n\n"
example_title: "walk"
- text: "What do you call an alligator who's just had surgery to remove his left arm?\npeter szemraj:\n\n"
example_title: "alligator"
- text: "If you could live anywhere, where would it be?\npeter szemraj:\n\n"
example_title: "dream living place"
- text: "What really makes you angry?\npeter szemraj:\n\n"
example_title: "pet peeve"
- text: "My friend says that she knows every language, but she doesn't speak any of them.. what's wrong with her?\npeter szemraj:\n\n"
example_title: "language"
- text: "What would you change about yourself if you could?\npeter szemraj:\n\n"
example_title: "change"
- text: "My first is in Asia, my second is in Europe, my third is in North America, and my fourth is in South America. What am I?\npeter szemraj:\n\n"
example_title: "continent"
- text: "Can you take me for dinner somewhere nice this time?\npeter szemraj:\n\n"
example_title: "dinner"
- text: "Honey, I have clogged the toilet for the third time this month.. sorry..\npeter szemraj:\n\n"
example_title: "overflow"
- text: "A man pushes his car to a hotel and tells the owner he's bankrupt. Why?\npeter szemraj:\n\n"
example_title: "brain teaser"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.4
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.95
top_k: 30
temperature: 0.65
repetition_penalty: 3.5
---
# mGPT: fine-tune on message data MWE
This model is a fine-tuned version of [sberbank-ai/mGPT](https://huggingface.co/sberbank-ai/mGPT) on 80k messages. Trained for one epoch, will be updated in a (separate) model repo later.
## Model description
- testing if fine-tuned personality data bleeds over to other languages without being trained in them explicitly
### Usage in python
Install the transformers library if you don't have it:
```
pip install -U transformers
```
load the model into a pipeline object:
```
from transformers import pipeline
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_chatbot = pipeline('text-generation',
'pszemraj/mGPT-Peter-mwe',
device=0 if device == 'cuda' else -1,
)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pkushiqiang/bert-split-title-org | 18670d83c9f78505010ce0fa2c8a47a8e23de316 | 2022-04-29T04:43:51.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | pkushiqiang | null | pkushiqiang/bert-split-title-org | 9 | 1 | transformers | 12,543 | Entry not found |
Sindhu/emo_roberta | a0e23e9fdc0cc3311526b7ce640ff29ba566c2aa | 2022-04-29T15:20:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Sindhu | null | Sindhu/emo_roberta | 9 | null | transformers | 12,544 | Pytorch Port of [EmoRoberta model](https://huggingface.co/arpanghoshal/EmoRoBERTa). |
cfilt/HiNER-collapsed-xlm-roberta-large | 1df6591c6b4c71eb34c251fb48a7bb643f2b86de | 2022-05-01T19:47:49.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:cfilt/HiNER-collapsed",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | cfilt | null | cfilt/HiNER-collapsed-xlm-roberta-large | 9 | null | transformers | 12,545 | ---
tags:
- generated_from_trainer
datasets:
- cfilt/HiNER-collapsed
metrics:
- precision
- recall
- f1
model-index:
- name: HiNER-collapsed-xlm-roberta-base
results:
- task:
name: Token Classification
type: token-classification
dataset:
type: cfilt/HiNER-collapsed
name: HiNER Collapsed
metrics:
- name: Precision
type: precision
value: 0.9137448834064936
- name: Recall
type: recall
value: 0.9296549644788663
- name: F1
type: f1
value: 0.9216312652954473
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiNER-collapsed-xlm-roberta-base
This model was trained from scratch on the cfilt/HiNER-collapsed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.14.0
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
rjuez00/meddocan-beto-ner | cf3f81d5a6f3cdf13240899d72565ad08b8b8697 | 2022-05-01T16:23:58.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | rjuez00 | null | rjuez00/meddocan-beto-ner | 9 | null | transformers | 12,546 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto_full_train_3_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto_full_train_3_epochs
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0445
- Precision: 0.9541
- Recall: 0.9481
- F1: 0.9511
- Accuracy: 0.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
Parsa/Chemical_explosion_classification | adc7007de931756d430732d4cff0427c3c232ca6 | 2022-06-01T22:05:37.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Parsa | null | Parsa/Chemical_explosion_classification | 9 | null | transformers | 12,547 | For testing it yourself, the easiest way is using the colab link below.
Github repo: https://github.com/mephisto121/Chemical_explosion_classification
[](https://colab.research.google.com/drive/1GQmh1g2bRdqgQCnM6b_iY-eAQCRfhMJP?usp=sharing) |
ptaszynski/japan-orientalization-pl | 10b0e4ed78c9b51b012dedc5cee6c1b0e84031df | 2022-05-06T06:51:13.000Z | [
"pytorch",
"bert",
"text-classification",
"pl",
"dataset:18th and 19th century articles mentioning Japan",
"transformers",
"license:cc-by-sa-4.0"
]
| text-classification | false | ptaszynski | null | ptaszynski/japan-orientalization-pl | 9 | 1 | transformers | 12,548 | ---
language: pl
license: cc-by-sa-4.0
datasets:
- 18th and 19th century articles mentioning Japan
---
# Model for detection of Orientalization of Japan in newspaper articles
This model was based on the original [HerBERT](https://huggingface.co/allegro/herbert-base-cased) Base.
The model was fine-tuned on a set of Polish press articles mentioning Japan from the years 1818-1939 to recognize whether an article (or any input text) presents a genuine description of Japan or is an example of the orientalization of Japan. By orientalization we mean that a text represents Japan through the lens of [Orientalism](https://en.wikipedia.org/wiki/Orientalism), i.e. a viewpoint that presents a piece of Eastern culture, in this case - Japan - in a deformed, distorted, and idealized form. In the definition of Orientalism we follow the work of [Edward Said](https://en.wikipedia.org/wiki/Edward_Said), especially his book "[Orientalism](https://en.wikipedia.org/wiki/Orientalism_(book))".
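A minimal classification sketch (the loading code and the example sentence are illustrative; check the model's config for the exact label names):

```python
from transformers import pipeline

# Polish-language classifier: does a text orientalize Japan or describe it genuinely?
classifier = pipeline(
    "text-classification",
    model="ptaszynski/japan-orientalization-pl",
)

# Illustrative Polish input: "Japan is a land full of mysterious geishas and samurai."
print(classifier("Japonia to kraj pełen tajemniczych gejsz i samurajów."))
```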
## Licenses
The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
## Citations
Please, cite this model using the following citation.
```
@inproceedings{ptaszynski2022herbert-japan,
title={Finetuned HerBERT model for detecting orientalization of Japan in newspaper articles},
author={Michal Ptaszynski and Pawel Dybala and Zuzanna Barczyk},
booktitle={HuggingFace},
url={https://huggingface.co/ptaszynski/japan-topic-detection},
year={2022}
}
``` |
niklaspm/linkbert-base-finetuned-squad | f4e3fbf5fca84d21fc49db6137ffbbc4cccaf913 | 2022-05-03T07:50:32.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2203.15827",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | niklaspm | null | niklaspm/linkbert-base-finetuned-squad | 9 | null | transformers | 12,549 | ---
license: apache-2.0
---
**Exact Match** 83.19
**F1** 90.46
Check out [linkbert-large-finetuned-squad](https://huggingface.co/niklaspm/linkbert-large-finetuned-squad), which achieves F1: 92.68 and EM: 86.5.
See [LinkBERT Paper](https://arxiv.org/abs/2203.15827) |
henry931007/mfma | ad7b949f2a0a200cc4fe773aef2b11f000190826 | 2022-05-31T08:23:15.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | henry931007 | null | henry931007/mfma | 9 | null | transformers | 12,550 | ## Pre-trained factual consistency checking model for abstractive summaries introduced in the following NAACL-22 paper.
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("henry931007/mfma")
```
```
@inproceedings{lee2022mfma,
title={Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking},
author={Hwanhee Lee and Kang Min Yoo and Joonsuk Park and Hwaran Lee and Kyomin Jung},
year={2022},
month={july},
booktitle={Findings of the Association for Computational Linguistics: NAACL 2022},
}
``` |
GonzaloA/distilroberta-base-finetuned-fakeNews | 3094c38ff7754eab86b8991a4237abc735994d49 | 2022-07-04T17:56:04.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | GonzaloA | null | GonzaloA/distilroberta-base-finetuned-fakeNews | 9 | 1 | transformers | 12,551 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilroberta-base-finetuned-fakeNews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-fakeNews
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0355
- Accuracy: 0.9910
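As a quick, illustrative usage sketch (the label-to-class mapping should be checked against the repository linked below):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="GonzaloA/distilroberta-base-finetuned-fakeNews",
)

# Illustrative headline; see the linked repository for the exact label meaning
print(clf("Scientists confirm the moon is made of cheese, officials say."))
```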
## Training and evaluation data
The full process used to train this model is available in this repository: https://github.com/G0nz4lo-4lvarez-H3rv4s/FakeNewsDetection
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0301 | 1.0 | 1523 | 0.0322 | 0.9868 |
| 0.0165 | 2.0 | 3046 | 0.0292 | 0.9892 |
| 0.0088 | 3.0 | 4569 | 0.0355 | 0.9910 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sangjeedondrub/tibetan-roberta-causal-base | 8de94327831c4594328bfd432f1f27deec63ba10 | 2022-05-05T02:17:14.000Z | [
"pytorch",
"roberta",
"text-generation",
"bo",
"transformers",
"tibetan",
"pretrained causal language model",
"license:mit"
]
| text-generation | false | sangjeedondrub | null | sangjeedondrub/tibetan-roberta-causal-base | 9 | 1 | transformers | 12,552 | ---
language:
- bo
tags:
- tibetan
- pretrained causal language model
- roberta
widget:
- text: "རིན་"
- text: "རྫོགས་པའི་"
- text: "ཆོས་ཀྱི་"
- text: "གངས་རིའི་"
- text: "བོད་ཀྱི་སྨན་"
license: "mit"
---
# A demo for generating text using `Tibetan Roberta Causal Language Model`
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name = 'sangjeedondrub/tibetan-roberta-causal-base'
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text_gen_pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer
)
init_text = 'རིན་'
outputs = text_gen_pipe(init_text,
do_sample=True,
max_new_tokens=200,
temperature=.9,
top_k=10,
top_p=0.92,
num_return_sequences=10,
truncate=True)
for idx, output in enumerate(outputs, start=1):
print(idx)
print(output['generated_text'])
```
# Output
```
1
རིན་ཆེན་མཚོས་བསྒྱུར།" མི་འགྱུར་རིན་ཆེན་མཚོས་བསྒྱུར་བའི 《 བོད་ཡིག་རིག་མཛོད་ཆེན་མོ 》 ལས། “ ལོ་རྒྱུས་ནི་ཤིན་ཏུ་རྒྱ་ཆེ་བའི་རིན་ཆེན་མཚོན་བྱེད་ཡིན། འདི་ནི་མི་འགྱུར་རིན་ཆེན་མཚོས་ར་གླིང་ཡིན་ད་ད་གྱི་མཚན་པར་གྱི་བོདེ་བོདེན་མ་དེ་ཡིནི་ལས་ར་ལས། ཁོད་ད་དུ་མིན་དེ། དུངས་ད་ལས། ” ཞེདེ། “ མ
2
རིན་ཐང་གཞལ་དུ་མེད་པའི་སེམས་ཀྱི་གཏིང་མཐའ་རུ། ལུས་པོའི་བདེ་ཐང་དང་འབྲེལ་བའི་སེམས་ཀྱི་བདེ་ཐང་ནི་ཁྱོད་དང་འབྲེལ་བ་དམ་པོ་ཡོད། གལ་ཏེ་སེམས་ཀྱི་བདེ་ཐང་ལ་གནོད་པ་མེད་ན་ཁྱོད་ཀྱིས་འཚོ་བ་དང་འབྲེལེགས་དེ་འབྲེལ་པ་ཁྱོརྒྱུ་མཚན་ཉེན་བ་མི་གླེང་དོང་འབྲེལོངས་པ་ལོསོངས་འབྲེལོསོང་འབྲེལ། འབྲེལ་མོགོགོང་པོ་ས། ཁྱོ
3
རིན་ཆེན་ནི་བོད་ལྗོངས་ལྷ་ས་གྲོང་ཁྱེར་ཆུ་ཤུར་རྫོང་ཆུ་ཤུར་གྲོང་ཚོའི་ཡིག་ཚགས་དང་གཱ་བཟོས་པའི་དུད་ཚང་དབུལ་པོ་ཡིན་པ་དང་གྲོང་མི་ཚོས་ལོ་ལྟར་ཁྱིམ་དུ་མི་ཁྲིད་པ་མ་ཟད། གླ་པ་ཁྱིམིགླ་གླ་བྱར་ཁོའོའོའི་གླ་ཕོར་ཡོའོའང་ས་ཡོས་ཡོའོས་ས་ས་ཡོས་ཡོས་ཁུར་ས་འོས་ཁྱིས་མིགེར་
4
རིན་ཆེན་ནོར་བུ་རིན་ཆེན་གྱི་གསུང་འབུམ་པར་སྐྲུན་དང་འབྲེལ་ཏེ་རྔ་བའི་པར་ཁང་ཆེན་མོ་བཞི་དང་རྔ་བའི་པར་ཁང་ཆེན་མོ་གཅིག། རྔ་བའི་པར་ཁང་ཆེན་མོ་དྲུག་བཅས་ཀྱི་པར་ཤིང་ལྡེབ་རེ་རེ་བཞིན་པར་ཁང་གྲངས་རྒྱལྔ་སྔརྒྱལྔ་བའི་ཕྱགྱགྱི་དཔར་དུ་བསྒྲིགྱང་དང་བཞི་དཔར་བཞེངས་གྲུབ་སྒྲིག་བཞི་ཕུང་ཕྱགྲགས་སྒྲིགུ་ཕེངས་སྒྲིག་སྒྲིགི་
5
རིན་པོ་ཆེ། དེས་ན་ང་ལ་ང་སྐྱེས་དུས་ཀྱང་ང་སྐྱེས་ཡོད་ཅེས་བཤད་འདོད། དེ་ལྟར་བྱས་ཚེ་ང་སྐྱེས་ཚེ་ལོངས་སྤྱོད་ཀྱི་རྒྱུར་འགྱུར་རྒྱུ་ནི་ང་རང་སྐྱེས་དུས་ཀྱི་ཉི་མ་དེ་ཡིན་ནམ། ངས་བསམས་ན། ངས་ན། ང་ན་ཀྱང་དོསྐྱིདེ་འདོངས་ནི་ང་དེ་ནི་རང་དེ་སྣང་འདི་ལོམི་སྐྱིད་སྐྱིདེ་བས་དང་སྐྱིདོམས་སྐྱིད་དང་ས་སྐྱིད་ཁོངས་
6
རིན་ཆེན་ནོར་བུའི་གསེང་དུ་བསྐྱིལ་བའི་བུ་མོ་ཞིག་ཡིན་པའི་ཆ་ནས། ད་ལོའི་ཟླ་བཅུ་བར་ཨ་མ་ཁོས་སྡེ་བའི་ཏང་ཡོན་རྒན་པ་ཞིག་ལ་ཁྱིམ་ཚང་རེར་སྒོར་སྟོང་གཅིག་ལྷག་རོགས་སྐྱོར་བྱས་ཏེ། “ ང་པ་ང་མེད། ང་ང་ང་ཚང་ང་ཚང་ལ་ཚང་ཚང་ང་ག ” གྲོང་དང་། རྫོང་ར། ” གྲོང་ས་ལ། གྲོང་བཅས་བ། “ དོལོས་བ་སོས་དང
7
རིན་ཆེན་ནོར་བུ་ཡི་བྱ་བ་སྒྲུབ་མཁན་ཚོས་རྒྱུན་དུ་མཐོང་ཐུབ་པ་ནི་སྙན་ངག་པ་རྣམས་ཀྱི་མིག་ལ་ལྟ་ཞིང་སྙན་ངག་པས་བྲིས་པའི་ཚིག་རྐང་རེ་རེར་བརྗོད་དོན་ནི་སྙན་ངག་ལ་སྤྱིར་བཏང་གི་ཚིག་རིག་ད་མོརྣམོརྩེ་བ་གྱི་ཚིགྱི་ས་ཤིང་ས་སྣ་ལྟ་ཞིགྱི་ས་དང་ད་ད་དང་། སྙན་ལ་རྩེ་ན་རྩེརྩེ་སྣ་སྙན་མ་རྩེ་
8
རིན་ཆེན་སྒྲོལ་མ། བོད་རིགས། བོད་རིགས། མཚོ་སྔོན་མི་རིགས་སློབ་ཆེན་བོད་རིག་པའི་སློབ་གླིང་གི 2012 ལོའི་ཞིབ་འཇུག་སློབ་མ། རྨ་ལྷོ་ཁུལ་མི་རིགས་དགེ་ཐོན་སློབ་གྲྭ་ཡི་སྤྱི་ལོ 2008 ལོའི་ཞིར་དངོའི་ཞིང་སྒོར་སྒོར་སྒོར་སྒོར་ས་སྟོདུག་དང་སྟོནུག་ཕྱི་ས་ག་ཕྱི་ས་སྟོད་ར་ག་ཕྱི་དུལི་ཞིས་ས་འབུལ་
9
རིན་ཆེན་རིན་ཆེན་རྒྱལ་དང་ལྷ་རྒྱལ་སོགས་ཀྱི་རིགས་རྒྱུད་སྣ་ཚོགས་པའི་རིག་རྫས་རིན་ཐང་བྲལ་བ་ཞིག་ཡིན་པར་མ་ཟད། རྒྱལ་ཁབ་རིམ་པའི་གཟུགས་འདས་ཤུལ་བཞག་རིག་གནས་རྒྱུད་འཛིན་པ་རྣམས་པར་ལ་པ་ཞིན་གསར་པ་ག་པའི་མ་གས་གསར་རུ་ཐོནུས་པ་བ་ར་ཞིགྱུང་ཞེའི་ར་ས་ཐུལ། ཀྲུང་པ་ས་ར་ས་ག་ག་ར་ག་
10
རིན་ཐང་ལྟ་བའི་སློབ་ཁྲིད་བྱེད་ཐབས་ལ་སློབ་ཁྲིད་ཀྱི་གོ་རིམ་ཁྲོད་དུ་རིག་གཞུང་གི་ཤེས་བྱ་དང་སློབ་ཁྲིད་ཀྱི་ནུས་པ་ལྡན་པ་མ་ཟད། ད་དུང་རིག་གནས་ཀྱི་ཁྱད་པར་ཡང་ལྡན། དེང་སྐབས་དགེ་ཚང་དགེ་ཚང་ཚང་དགེ་རྫོང་ཆེ་རིན་པ་ཆེརྫོང་དགེ་ཆུའི་ཡིན་ག་ས་པོ་དགོན་ཆོན་ཆོང་སེར་ས་སློབ་ཡུའི་དགོག་ས་
```
# About
This model is trained and released by Sangjee Dondrub [sangjeedondrub at live dot com]; the sole purpose of these experiments is to improve my familiarity with the Transformers APIs. |
heranm/finetuning-sentiment-model-3000-samples | 8962664ddb80f0992a420b07a6079a28c2905461 | 2022-05-05T16:50:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | heranm | null | heranm/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,553 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8766233766233766
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3131
- Accuracy: 0.8733
- F1: 0.8766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kSaluja/test-ner | ed3109f7a95127bb7cbf1b0d143b3e02717e8e76 | 2022-05-06T10:11:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kSaluja | null | kSaluja/test-ner | 9 | null | transformers | 12,554 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1014
- Precision: 0.9609
- Recall: 0.9574
- F1: 0.9591
- Accuracy: 0.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 151 | 0.1848 | 0.9060 | 0.9184 | 0.9122 | 0.9490 |
| No log | 2.0 | 302 | 0.1137 | 0.9548 | 0.9529 | 0.9538 | 0.9705 |
| No log | 3.0 | 453 | 0.1014 | 0.9609 | 0.9574 | 0.9591 | 0.9732 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
thusken/nb-bert-base-ctr-regression | d1b506627500a23af91f8414b9f909765c7325e0 | 2022-05-10T13:41:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index"
]
| text-classification | false | thusken | null | thusken/nb-bert-base-ctr-regression | 9 | null | transformers | 12,555 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: nb-bert-base-ctr-regression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nb-bert-base-ctr-regression
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0073
- Mse: 0.0073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0106 | 1.0 | 1103 | 0.0069 | 0.0069 |
| 0.0073 | 2.0 | 2206 | 0.0072 | 0.0072 |
| 0.0058 | 3.0 | 3309 | 0.0063 | 0.0063 |
| 0.0038 | 4.0 | 4412 | 0.0073 | 0.0073 |
| 0.0025 | 5.0 | 5515 | 0.0064 | 0.0064 |
| 0.0019 | 6.0 | 6618 | 0.0065 | 0.0065 |
| 0.0014 | 7.0 | 7721 | 0.0066 | 0.0066 |
| 0.0011 | 8.0 | 8824 | 0.0067 | 0.0067 |
| 0.0008 | 9.0 | 9927 | 0.0066 | 0.0066 |
| 0.0007 | 10.0 | 11030 | 0.0066 | 0.0066 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
|
LiYuan/Amazon-Cross-Encoder-Classification | e7706e84fe6bdd8e26f53be2d705c84facfbb1ed | 2022-05-08T17:52:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:afl-3.0"
]
| text-classification | false | LiYuan | null | LiYuan/Amazon-Cross-Encoder-Classification | 9 | null | transformers | 12,556 | ---
license: afl-3.0
---
There are two types of Cross-Encoder models. One is the Cross-Encoder Regression model that we fine-tuned and mentioned in the previous section. Next, we have the Cross-Encoder Classification model. These two models are introduced in the same paper https://doi.org/10.48550/arxiv.1908.10084
Both models address the issue that the BERT model is too time- and resource-consuming to train on paired sentences. The weights of these two models are initialized from the BERT and RoBERTa networks; we only need to fine-tune them, spending much less time while yielding comparable or even better sentence embeddings. The figure below shows the architecture of the Cross-Encoder Classification model.

We then evaluated the model on the 2,000-example held-out test set and obtained a test accuracy of **46.05%**, almost identical to the best validation accuracy, suggesting the model generalizes well. |
Jeevesh8/bert_ft_qqp-8 | d1da2132679d2c2f83fb52feaddbd2c754f59e3b | 2022-05-09T09:50:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-8 | 9 | null | transformers | 12,557 | Entry not found |
Jeevesh8/bert_ft_qqp-15 | 3d6616214af890c9be426203516d6c219aa083fc | 2022-05-09T10:08:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-15 | 9 | null | transformers | 12,558 | Entry not found |
Jeevesh8/bert_ft_qqp-16 | 71e3e87ec376b36a538eb1d1e16f856e09a3f124 | 2022-05-09T10:11:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-16 | 9 | null | transformers | 12,559 | Entry not found |
Jeevesh8/bert_ft_qqp-21 | 9700bc3f23a78b3b141d49f6ab384b56cf64e668 | 2022-05-09T10:23:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-21 | 9 | null | transformers | 12,560 | Entry not found |
Jeevesh8/bert_ft_qqp-30 | a008c61ed1c0d984e18eb1b95fdfc558beb65c6d | 2022-05-09T10:46:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-30 | 9 | null | transformers | 12,561 | Entry not found |
Jeevesh8/bert_ft_qqp-32 | 014be90258a2c21ae7b17b3bb7058f88a89a98b2 | 2022-05-09T10:51:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-32 | 9 | null | transformers | 12,562 | Entry not found |
Jeevesh8/bert_ft_qqp-33 | 0d8dcefc08a646decbcafa4363b02fab3f65ad5a | 2022-05-09T10:54:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-33 | 9 | null | transformers | 12,563 | Entry not found |
Jeevesh8/bert_ft_qqp-43 | 31856e693a5b7e490b0c3d9d04605b24011b626e | 2022-05-09T11:20:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-43 | 9 | null | transformers | 12,564 | Entry not found |
Jeevesh8/bert_ft_qqp-51 | 09ea0c682d9c720bd6f99799ed6a450d69f6a780 | 2022-05-09T11:40:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-51 | 9 | null | transformers | 12,565 | Entry not found |
Jeevesh8/bert_ft_qqp-76 | e4925fe807d9d99b035e58a2d1e2ff6301672a71 | 2022-05-09T12:44:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-76 | 9 | null | transformers | 12,566 | Entry not found |
Jeevesh8/bert_ft_qqp-83 | d41773cecfd538ebb8526486ea957c0758a7dc76 | 2022-05-09T13:02:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-83 | 9 | null | transformers | 12,567 | Entry not found |
Jeevesh8/bert_ft_qqp-86 | 2636314e1f3335aad48f1128ce60dec2b9b9ae32 | 2022-05-09T13:10:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-86 | 9 | null | transformers | 12,568 | Entry not found |
Jeevesh8/bert_ft_qqp-97 | 8af5c74fef84d26f657dbc383cbff1b83a15a219 | 2022-05-09T13:38:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-97 | 9 | null | transformers | 12,569 | Entry not found |
princeton-nlp/CoFi-MRPC-s95 | 429d63c589fe51ad2edb9998ccff154b3001c7cb | 2022-05-09T15:24:40.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
]
| text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-MRPC-s95 | 9 | null | transformers | 12,570 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset MRPC. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model. |
fujiki/t5-3b-en2ja | 45f72fd1cd645cde6a8c42f96394564a959ba2fd | 2022-07-12T12:59:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:cc-by-sa-3.0",
"autotrain_compatible"
]
| text2text-generation | false | fujiki | null | fujiki/t5-3b-en2ja | 9 | null | transformers | 12,571 | # Tokenizer
- the tokenizer is imported from [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese)
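A minimal translation sketch (standard seq2seq loading; whether a task prefix is needed is an assumption we do not verify here):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "fujiki/t5-3b-en2ja"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumption: plain English text in, Japanese text out (no task prefix)
inputs = tokenizer("I would like a cup of green tea, please.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```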
# License
[CC-BY SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.ja)
---
license: cc-by-sa-3.0
---
|
usmanazhar/finetuning-sentiment-model-3000-samples | 6b4dc49f2dbe971799229c21eb4736d4da3701f1 | 2022-05-10T09:27:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | usmanazhar | null | usmanazhar/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,572 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8766233766233766
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3346
- Accuracy: 0.8733
- F1: 0.8766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Udi-Aharon/distilbert-base-uncased-finetuned-ner | 05d0da03674d209404a9b267532a0afe66b7a0a8 | 2022-05-10T15:59:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Udi-Aharon | null | Udi-Aharon/distilbert-base-uncased-finetuned-ner | 9 | null | transformers | 12,573 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.924327912379688
- name: Recall
type: recall
value: 0.9346683074169371
- name: F1
type: f1
value: 0.9294693514295249
- name: Accuracy
type: accuracy
value: 0.9836529143565221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
- Precision: 0.9243
- Recall: 0.9347
- F1: 0.9295
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2396 | 1.0 | 878 | 0.0715 | 0.9135 | 0.9228 | 0.9181 | 0.9805 |
| 0.051 | 2.0 | 1756 | 0.0617 | 0.9192 | 0.9334 | 0.9263 | 0.9826 |
| 0.0295 | 3.0 | 2634 | 0.0615 | 0.9243 | 0.9347 | 0.9295 | 0.9837 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CleveGreen/JobClassifier_v3_gpt | 3ce6c6302339162e0c42d66027545b58e0ad8e4e | 2022-05-11T20:40:57.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | CleveGreen | null | CleveGreen/JobClassifier_v3_gpt | 9 | null | transformers | 12,574 | Entry not found |
Khyeyoon/MRC_roberta | 34d904282df98b0f07c9fa16e03b3a4094ce5a72 | 2022-05-12T03:22:47.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Khyeyoon | null | Khyeyoon/MRC_roberta | 9 | null | transformers | 12,575 | src/models/0511_hard_dataset |
sagerpascal/bert-finetuned-ner | c1e9c30d9b41980d49dbfb959aa086ed92797ae7 | 2022-05-12T07:11:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | sagerpascal | null | sagerpascal/bert-finetuned-ner | 9 | null | transformers | 12,576 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9349014411131357
- name: Recall
type: recall
value: 0.9498485358465163
- name: F1
type: f1
value: 0.9423157191752232
- name: Accuracy
type: accuracy
value: 0.9858862659680933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0646
- Precision: 0.9349
- Recall: 0.9498
- F1: 0.9423
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
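A minimal sketch of running the checkpoint without the `pipeline` helper (the sentence is illustrative only):

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "sagerpascal/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# illustrative sentence, not from the CoNLL-2003 evaluation split
inputs = tokenizer("George Washington lived in Mount Vernon, Virginia.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map each token's highest-scoring label id back to its tag name
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```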
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0834 | 1.0 | 1756 | 0.0686 | 0.9140 | 0.9354 | 0.9246 | 0.9825 |
| 0.0421 | 2.0 | 3512 | 0.0596 | 0.9205 | 0.9472 | 0.9336 | 0.9849 |
| 0.0183 | 3.0 | 5268 | 0.0646 | 0.9349 | 0.9498 | 0.9423 | 0.9859 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
upsalite/distilbert-base-uncased-finetuned-emotion | 8944cf17db5c2a71bde4d5dd2758f9b04570cf7e | 2022-05-31T13:37:38.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | upsalite | null | upsalite/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,577 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9284995196221415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2083
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
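A minimal inference sketch for this checkpoint (the input sentence is made up; the predicted emotion label depends entirely on the fine-tuned weights):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="upsalite/distilbert-base-uncased-finetuned-emotion",
)

# illustrative input, not taken from the emotion dataset
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```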
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8128 | 1.0 | 250 | 0.3000 | 0.914 | 0.9109 |
| 0.2423 | 2.0 | 500 | 0.2083 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
gaganpathre/amgerindaf | 5750109a650474107308521553e86a193d2933b3 | 2022-05-13T10:06:53.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | gaganpathre | null | gaganpathre/amgerindaf | 9 | 1 | transformers | 12,578 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: amgerindaf
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8469750881195068
---
# amgerindaf
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
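A minimal usage sketch (the image path below is a placeholder; any local file or URL accepted by the pipeline works):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="gaganpathre/amgerindaf")

# placeholder path; replace with a real image file or URL (requires Pillow)
print(classifier("path/to/some_image.jpg"))
```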
## Example Images
#### african

#### american

#### german

#### indian
 |
dreamerdeo/ground-en-roberta-large | 6acfd925405e2241178d33b22ec316b9a83c69d4 | 2022-05-16T05:37:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | dreamerdeo | null | dreamerdeo/ground-en-roberta-large | 9 | null | transformers | 12,579 | Entry not found |
kaushalkhator/bert-to-distilbert-NER | 39ba995c428bd6dd9e81b670499a661070fd46d5 | 2022-05-14T16:01:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kaushalkhator | null | kaushalkhator/bert-to-distilbert-NER | 9 | null | transformers | 12,580 | Entry not found |
mertyrgn/distilbert-base-uncased-finetuned-emotion | 2bd0f7c21b2fc7b8832841d6eb3be70f5771175c | 2022-05-15T13:58:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | mertyrgn | null | mertyrgn/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,581 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235106231638174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2064
- Accuracy: 0.9235
- F1: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
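A minimal sketch that returns the score of every emotion label rather than only the top one (the input sentence is invented):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mertyrgn/distilbert-base-uncased-finetuned-emotion",
)

# illustrative input, not taken from the evaluation split
scores = classifier("I was not expecting that ending at all.", return_all_scores=True)[0]
for prediction in sorted(scores, key=lambda p: p["score"], reverse=True):
    print(prediction["label"], round(prediction["score"], 4))
```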
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8272 | 1.0 | 250 | 0.2939 | 0.917 | 0.9153 |
| 0.2414 | 2.0 | 500 | 0.2064 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
kktoto/kt_punc | a15b8358ca8d3b706852f6d3efb9a5ef54c5ad91 | 2022-05-15T15:16:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:chn_senti_corp",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/kt_punc | 9 | null | transformers | 12,582 | ---
tags:
- generated_from_trainer
datasets:
- chn_senti_corp
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: kt_punc
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: chn_senti_corp
type: chn_senti_corp
args: default
metrics:
- name: Precision
type: precision
value: 0.7078651685393258
- name: Recall
type: recall
value: 0.7313662547821116
- name: F1
type: f1
value: 0.7194238380517767
- name: Accuracy
type: accuracy
value: 0.957316742326961
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kt_punc
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the chn_senti_corp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Precision: 0.7079
- Recall: 0.7314
- F1: 0.7194
- Accuracy: 0.9573
## Model description
More information needed
## Intended uses & limitations
More information needed
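A minimal inference sketch (the unpunctuated sentence is invented, and the tag names printed depend on whatever label set the checkpoint was trained with):

```python
from transformers import pipeline

punctuator = pipeline("token-classification", model="kktoto/kt_punc")

# illustrative unpunctuated Chinese text; each token receives a punctuation tag
for token in punctuator("这家酒店位置很好交通方便服务也不错"):
    print(token["word"], token["entity"], round(token["score"], 3))
```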
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1661 | 1.0 | 600 | 0.1351 | 0.6566 | 0.6833 | 0.6697 | 0.9498 |
| 0.1246 | 2.0 | 1200 | 0.1330 | 0.6854 | 0.6665 | 0.6758 | 0.9521 |
| 0.1121 | 3.0 | 1800 | 0.1303 | 0.6885 | 0.6994 | 0.6939 | 0.9537 |
| 0.1008 | 4.0 | 2400 | 0.1359 | 0.6836 | 0.7248 | 0.7036 | 0.9543 |
| 0.0809 | 5.0 | 3000 | 0.1404 | 0.7035 | 0.7082 | 0.7059 | 0.9559 |
| 0.0696 | 6.0 | 3600 | 0.1449 | 0.6986 | 0.7224 | 0.7103 | 0.9560 |
| 0.0628 | 7.0 | 4200 | 0.1563 | 0.7063 | 0.7214 | 0.7138 | 0.9567 |
| 0.0561 | 8.0 | 4800 | 0.1618 | 0.7024 | 0.7333 | 0.7175 | 0.9568 |
| 0.0525 | 9.0 | 5400 | 0.1669 | 0.7083 | 0.7335 | 0.7207 | 0.9574 |
| 0.0453 | 10.0 | 6000 | 0.1703 | 0.7079 | 0.7314 | 0.7194 | 0.9573 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData | e42f0533d87516bb9ebe2af76459c58ed992ddf9 | 2022-05-16T01:21:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData | 9 | null | transformers | 12,583 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_preTrained_with_noisyData
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_preTrained_with_noisyData
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Precision: 0.9203
- Recall: 0.8703
- F1: 0.8946
- Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
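A minimal usage sketch (the German sentence is invented, and the subjectivity tag set is whatever the checkpoint was fine-tuned with):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData",
    aggregation_strategy="simple",  # merge word pieces into labelled spans
)

# illustrative German sentence, not from the (unspecified) training data
print(tagger("Der Film war meiner Meinung nach absolut großartig."))
```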
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 245 | 0.0258 | 0.9172 | 0.7990 | 0.8540 | 0.9918 |
| No log | 2.0 | 490 | 0.0192 | 0.9203 | 0.8703 | 0.8946 | 0.9939 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huawei-noah/AutoTinyBERT-KD-S4 | 9c07af1b1cb3cc40ba3e550cc2d20e6fbcaef0db | 2022-05-16T15:14:43.000Z | [
"pytorch",
"transformers",
"license:other"
]
| null | false | huawei-noah | null | huawei-noah/AutoTinyBERT-KD-S4 | 9 | null | transformers | 12,584 | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow the default setting of architecture hyper-parameters in BERT (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search for architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
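Purely as a conceptual illustration of the idea (this is not the AutoTinyBERT code; the search space, latency budget, and proxy score below are invented placeholders), a one-shot NAS loop samples architecture hyper-parameters and keeps the best candidate that fits the budget:

```python
import random

# toy search space over architecture hyper-parameters (placeholder values)
SEARCH_SPACE = {
    "num_layers": [4, 5, 6],
    "hidden_size": [384, 448, 512],
    "intermediate_size": [576, 768, 1024],
    "num_heads": [6, 8, 12],
}

def sample_architecture():
    return {name: random.choice(choices) for name, choices in SEARCH_SPACE.items()}

def proxy_score(arch):
    # placeholder: a real one-shot NAS run would score the sub-network
    # extracted from a trained super-model on a small validation set
    return random.random()

def estimated_latency(arch):
    # placeholder latency model: grows with depth and width
    return arch["num_layers"] * (arch["hidden_size"] + arch["intermediate_size"]) / 1e4

best_arch, best_score = None, float("-inf")
for _ in range(200):
    candidate = sample_architecture()
    if estimated_latency(candidate) > 0.5:  # keep only candidates under the latency budget
        continue
    score = proxy_score(candidate)
    if score > best_score:
        best_arch, best_score = candidate, score

print(best_arch, best_score)
```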
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
itzo/distilbert-base-uncased-fine-tuned-on-emotion-dataset | 44ce44b184b84b9b84b1455e721e73ff469c8652 | 2022-05-17T08:25:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | itzo | null | itzo/distilbert-base-uncased-fine-tuned-on-emotion-dataset | 9 | null | transformers | 12,585 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-fine-tuned-on-emotion-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-fine-tuned-on-emotion-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2138
- Accuracy Score: 0.9275
- F1 Score: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------:|
| 0.8024 | 1.0 | 250 | 0.3089 | 0.906 | 0.9021 |
| 0.2448 | 2.0 | 500 | 0.2138 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_42 | d264d748f5d2374a9a62a9e03d03b818505b9ed8 | 2022-05-17T23:41:41.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.inclusive.seed_42 | 9 | null | transformers | 12,586 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_66 | a8a176c1f6d644c479780d6e519701079fb37905 | 2022-05-17T23:58:09.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.inclusive.seed_66 | 9 | null | transformers | 12,587 | Entry not found |
CEBaB/roberta-base.CEBaB.absa.inclusive.seed_88 | a41a0fa98571e89e6121b64c6b9baf47546de6b8 | 2022-05-18T00:31:49.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.absa.inclusive.seed_88 | 9 | null | transformers | 12,588 | Entry not found |
CEBaB/lstm.CEBaB.absa.inclusive.seed_88 | cb8653f9e69c2621786fb4f08e7ffdbda9a098c9 | 2022-05-18T00:43:46.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.absa.inclusive.seed_88 | 9 | null | transformers | 12,589 | Entry not found |
wooglee/distilbert-imdb | b79dde4fd5510260c897a12556f4ab88877bb4a7 | 2022-05-19T03:32:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | wooglee | null | wooglee/distilbert-imdb | 9 | null | transformers | 12,590 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.1951 | 0.9240 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0a0+17540c5
- Datasets 2.2.1
- Tokenizers 0.12.1
|
austin/medberta_v2 | 6105b09d9f64db7e6d9eee265a4cecd33ffcef56 | 2022-05-25T13:39:50.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | austin | null | austin/medberta_v2 | 9 | null | transformers | 12,591 | ---
tags:
- generated_from_trainer
model-index:
- name: medberta_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medberta_v2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7111
## Model description
More information needed
## Intended uses & limitations
More information needed
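A minimal fill-mask sketch (the clinical-sounding sentence is invented; the model's own mask token is inserted programmatically so the snippet does not hardcode it):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="austin/medberta_v2")

# illustrative sentence built around the tokenizer's mask token
sentence = f"The patient was started on {fill_mask.tokenizer.mask_token} for hypertension."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], round(prediction["score"], 4))
```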
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.9168 | 0.96 | 40000 | 0.8666 |
| 0.8392 | 1.91 | 80000 | 0.7871 |
| 0.7867 | 2.87 | 120000 | 0.7432 |
| 0.7418 | 3.83 | 160000 | 0.7111 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
ENM/scibert_scivocab_cased-new-finetuned-leukaemia | 05038e706e1a3cc029c66a97a339d94d32335fbb | 2022-05-25T16:50:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-generation | false | ENM | null | ENM/scibert_scivocab_cased-new-finetuned-leukaemia | 9 | null | transformers | 12,592 | ---
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_cased-new-finetuned-leukaemia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_cased-new-finetuned-leukaemia
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 51 | 2.9065 |
| No log | 2.0 | 102 | 1.1973 |
| No log | 3.0 | 153 | 0.9055 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
connectivity/bert_ft_qqp-6 | 92330357bd05984438849534fe8a2d516d63088e | 2022-05-21T16:31:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-6 | 9 | null | transformers | 12,593 | Entry not found |
connectivity/bert_ft_qqp-10 | 3a948a5e1b4787f552da809baad476ccb5f00386 | 2022-05-21T16:31:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-10 | 9 | null | transformers | 12,594 | Entry not found |
ThePixOne/gptcb | 80566986ab584e825475d420c00e246ea3e0d85f | 2022-05-21T19:11:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ThePixOne | null | ThePixOne/gptcb | 9 | 1 | transformers | 12,595 | GPT2 345M trained on 30 years of CentraBank's speeches |
pritoms/opt-350m-opt-350m-pretrained | c0ce395d367d0a7a026a19a5d34e0e6c1b0dea28 | 2022-05-21T20:53:36.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers"
]
| text-generation | false | pritoms | null | pritoms/opt-350m-opt-350m-pretrained | 9 | null | transformers | 12,596 | Entry not found |
spital/gpt2-small-czech-cs | 8995bc6ac4b78abaee3f0c46a1e211fa576dac11 | 2022-05-25T15:11:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"cs",
"dataset:wikipedia",
"transformers",
"license:cc-by-sa-4.0"
]
| text-generation | false | spital | null | spital/gpt2-small-czech-cs | 9 | null | transformers | 12,597 | ---
language: cs
widget:
- text: "Umělá inteligence pomůže lidstvu překonat budoucí"
example_title: "Umělá inteligence ..."
- text: "Současný pokrok v oblasti umělých neuronových sítí představuje"
example_title: "Současný pokrok ..."
- text: "Z hlediska obecné teorie relativity"
example_title: "Z hlediska ..."
- text: "Vědci objevili šokující nález - stádo jednorožců žijící v odlehlém, dosud neprobádaném údolí v Andách. Ještě větším překvapením pro vědce byla skutečnost, že jednorožci mluvili"
example_title: "Vědci objevili ..."
license: cc-by-sa-4.0
tags:
- text-generation
- transformers
- pytorch
- gpt2
datasets:
- wikipedia
---
# GPT2-small-czech-cs: a Language Model for Czech text generation (and more NLP tasks ...)
## Introduction
GPT2-small-czech-cs is a first experimental model for the Czech language based on the GPT-2 small model.
It was trained on Czech Wikipedia using **Transfer Learning and Fine-tuning techniques** in about a weekend on a single NVIDIA GTX 1080 Ti GPU, with about 1 GB of training data (cswiki). A training server with a couple of GPUs for experiments and one RTX 3080 Ti was generously provided by [ONYX engineering, spol. s r.o.](http://www.onyx.cz/).
This experiment is a proof-of-concept that it is possible to get a state-of-the-art language model in any language with low resources.
It was fine-tuned from the [English pre-trained GPT-2 small](https://huggingface.co/gpt2) using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the [fastai2](https://dev.fast.ai/) Deep Learning framework. All the fine-tuning fastai v2 techniques were used. This work was inspired by [Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787), citation below.
It is now available on Hugging Face under [gpt2-small-czech-cs](https://huggingface.co/spital/gpt2-small-czech-cs). We release it under [CC BY SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) (i.e. allowing commercial use). For further information or requests, please post a Github issue at [Github - gpt2-small-czech-cs](https://github.com/spital/gpt2-small-czech-cs).
## Model description
*Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)*
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
## How to use GPT2-small-czech-cs with HuggingFace (PyTorch)
### Load the model and its sub-word tokenizer (Byte-level BPE)
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
pretrained = 'spital/gpt2-small-czech-cs' # a local directory or huggingface model name
tokenizer = GPT2Tokenizer.from_pretrained(pretrained)
model = GPT2LMHeadModel.from_pretrained(pretrained, pad_token_id=tokenizer.eos_token_id)
# Sequence length max is 1024
tokenizer.model_max_length = 1024
# disable dropout (or leave in train mode to finetune)
model.eval()
```
### Generate one word
```python
import torch
# input sequence
text = "Umělá inteligence pomůže lidstvu překonat budoucí"
inp_tokens = tokenizer(text, return_tensors="pt")
# model output
outputs = model(**inp_tokens, labels=inp_tokens["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])
# results
print('input text:', text)
print('predicted text:', predicted_text)
# predicted text: problémy
```
### Generate a few full sequences
```python
text = "Umělá inteligence pomůže lidstvu překonat budoucí"
encoded = tokenizer.encode(text, return_tensors='pt')
# torch.random.manual_seed(0) # if you need reproducibility
sample_outputs = model.generate(encoded, do_sample=True,
max_length=encoded.size()[1]+20,
no_repeat_ngram_size=2, top_p=0.95, top_k=50,
temperature=0.65, num_return_sequences=3)
for i, sample_output in enumerate(sample_outputs):
    print("{}: {}\n".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
```
## Limitations and bias
The training data used for this model come from Czech Wikipedia dump. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don't support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Author
Czech GPT-2 small was trained and evaluated by [Jiri Spitalsky](https://www.linkedin.com/in/jiri-spitalsky-09400a2) thanks to the computing power of the GPUs and other hardware generously provided by [ONYX engineering, spol. s r.o.](http://www.onyx.cz/).
## Citation
My special thanks go to Pierre Guillou for his work **GPorTuguese-2 (Portuguese GPT-2 small): a Language Model for Portuguese text generation (and more NLP tasks...)**, my work would not be possible without it.
|
BaxterAI/finetuning-sentiment-model-3000-samples | ab22bd8bccef536ed64ca7db8e0c46f754e50d65 | 2022-05-24T01:22:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:amazon_polarity",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | BaxterAI | null | BaxterAI/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,598 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9240816326530612
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8170
- Accuracy: 0.9225
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
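A minimal inference sketch (the two review-style inputs are invented, not taken from amazon_polarity):

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="BaxterAI/finetuning-sentiment-model-3000-samples",
)

# illustrative product-review style inputs
reviews = [
    "Arrived quickly and works exactly as described.",
    "Stopped working after two days, very disappointed.",
]
print(sentiment(reviews))
```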
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
unlisboa/bart_qa_assistant | 255f622844fd8aafde30beaa8e54a6e42619dc5d | 2022-05-24T18:39:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:eli5",
"dataset:stackexchange(pets, cooking, gardening, diy, crafts)",
"transformers",
"generative qa",
"autotrain_compatible"
]
| text2text-generation | false | unlisboa | null | unlisboa/bart_qa_assistant | 9 | 2 | transformers | 12,599 | ---
language: en
tags:
- generative qa
datasets:
- eli5
- stackexchange(pets, cooking, gardening, diy, crafts)
---
Work by [Frederico Vicente](https://huggingface.co/mrvicente) & [Diogo Tavares](https://huggingface.co/d-c-t). We fine-tuned BART Large for the task of generative question answering. It was trained on eli5, askScience, and StackExchange data from the following forums: pets, cooking, gardening, diy, crafts.
### Usage
```python
from transformers import (
BartForConditionalGeneration,
BartTokenizer
)
import torch
import json
def read_json_file_2_dict(filename, store_dir='.'):
with open(f'{store_dir}/{filename}', 'r', encoding='utf-8') as file:
return json.load(file)
def get_device():
# If there's a GPU available...
if torch.cuda.is_available():
device = torch.device("cuda")
n_gpus = torch.cuda.device_count()
first_gpu = torch.cuda.get_device_name(0)
print(f'There are {n_gpus} GPU(s) available.')
print(f'GPU gonna be used: {first_gpu}')
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
return device
model_name = 'unlisboa/bart_qa_assistant'
tokenizer = BartTokenizer.from_pretrained(model_name)
device = get_device()
model = BartForConditionalGeneration.from_pretrained(model_name).to(device)
model.eval()
# example question (illustrative only; any free-form question works)
question = "How often should I water a cactus?"
model_input = tokenizer(question, truncation=True, padding=True, return_tensors="pt")
generated_answers_encoded = model.generate(input_ids=model_input["input_ids"].to(device),attention_mask=model_input["attention_mask"].to(device),
force_words_ids=None,
min_length=1,
max_length=100,
do_sample=True,
early_stopping=True,
num_beams=4,
temperature=1.0,
top_k=None,
top_p=None,
# eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=2,
num_return_sequences=1,
return_dict_in_generate=True,
output_scores=True)
response = tokenizer.batch_decode(generated_answers_encoded['sequences'], skip_special_tokens=True,clean_up_tokenization_spaces=True)
print(response)
```
Have fun! |