modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cambridgeltl/sst_distilbert-base-uncased | fde3ca1b6ad8e5468c2f79396dc054c7c9133e6d | 2022-03-14T10:27:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | cambridgeltl | null | cambridgeltl/sst_distilbert-base-uncased | 15 | null | transformers | 9,600 | Entry not found |
RobertoMCA97/xlm-roberta-base-finetuned-panx-fr | e96d13944859e199d637d1bd4c5c0ab0e5fac36e | 2022-03-16T12:40:56.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | RobertoMCA97 | null | RobertoMCA97/xlm-roberta-base-finetuned-panx-fr | 15 | null | transformers | 9,601 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8354854938789199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2651
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
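These settings correspond to the standard 🤗 `TrainingArguments`. A sketch of the mapping (illustrative only: the output directory and evaluation strategy are assumptions, and Adam with betas=(0.9,0.999)/epsilon=1e-08 is the `Trainer` default optimizer):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-fr",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table below
)
```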
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 |
| 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 |
| 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Guen/guen_test_prompt_generation | f68fe0b1ddb9b2145491b2a1b4771e9b6459664f | 2022-03-16T22:33:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Guen | null | Guen/guen_test_prompt_generation | 15 | null | transformers | 9,602 | A small language generation head to generate text from a prompt.
Fine-tuned on the t5-base model with the aeslc dataset. |
IIC/beto-base-spanish-sqac | e74bc7eeae45fa19b5a6f37c438ebdad1eacb9a8 | 2022-04-02T15:10:05.000Z | [
"pytorch",
"bert",
"question-answering",
"es",
"dataset:PlanTL-GOB-ES/SQAC",
"arxiv:2107.07253",
"transformers",
"model-index",
"autotrain_compatible"
]
| question-answering | false | IIC | null | IIC/beto-base-spanish-sqac | 15 | 1 | transformers | 9,603 | ---
language:
- es
tags:
- question-answering # Example: audio
datasets:
- PlanTL-GOB-ES/SQAC
metrics:
- f1
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: beto-base-spanish_sqac
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: question-answering # Optional. Example: Speech Recognition
dataset:
type: SQAC # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: PlanTL-GOB-ES/SQAC # Required. Example: Common Voice zh-CN
args: es # Optional. Example: zh-CN
metrics:
- type: f1
value: 76.2
name: f1
---
This model was trained on the [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) dataset, provided by [BSC](https://www.bsc.es/). It is a question-answering dataset originally developed in Spanish. As for the model, it is a fine-tuned version of [BETO](https://github.com/dccuchile/beto), a Spanish BERT developed by the University of Chile.
For training the model, we followed the recommendations of the original authors in [their paper](https://arxiv.org/abs/2107.07253), performing a full grid search over the hyperparameter space provided in the paper and selecting the best model based on eval\_loss.
You can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("IIC/beto-base-spanish-sqac")
model = AutoModelForQuestionAnswering.from_pretrained("IIC/beto-base-spanish-sqac")
question, text = "Quién es el padre de Luke Skywalker?", "En la famosa película, Darth Vader le dice a Luke Skywalker aquella frase que todos recordamos: yo soy tu padre."
inputs = tokenizer(question, text, return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
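The snippet above computes a training-style loss with dummy start/end positions; to actually extract an answer at inference time, the span can be decoded from the start/end logits. A minimal sketch (the `argmax` decoding is a simplification of what `QuestionAnsweringPipeline` does internally):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("IIC/beto-base-spanish-sqac")
model = AutoModelForQuestionAnswering.from_pretrained("IIC/beto-base-spanish-sqac")

question = "Quién es el padre de Luke Skywalker?"
context = "En la famosa película, Darth Vader le dice a Luke Skywalker aquella frase que todos recordamos: yo soy tu padre."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Most likely start/end token positions, decoded back to text
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```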
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
Graphcore/roberta-base-squad | 270a133d717f6135c9319146d241b1cbe1442518 | 2022-05-25T18:34:13.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"arxiv:1907.11692",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Graphcore | null | Graphcore/roberta-base-squad | 15 | null | transformers | 9,604 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Graphcore/roberta-base-squad
results: []
---
# Graphcore/roberta-base-squad
BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice and masked language modelling.
It is pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which reads the words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Pretrained representations reduce the need for heavily engineered task-specific architectures, and BERT achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Model description
RoBERTa builds on the BERT pretraining approach and improves on it by carefully re-evaluating a number of BERT's design decisions, finding that the original model was significantly undertrained.
It improves performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objective, training on longer sequences and dynamically changing the masking pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [HuggingFace/roberta-base](https://huggingface.co/roberta-base) on the SQuAD dataset.
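For inference, the pushed checkpoint should load as a standard RoBERTa question-answering model with plain 🤗 Transformers. A minimal sketch (an assumption of this card rewrite, not an official Graphcore example; the question is a placeholder and the context is taken from the training procedure section below):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Graphcore/roberta-base-squad")
result = qa(
    question="What hardware was the model trained on?",
    context="The model was trained on 16 Graphcore Mk2 IPUs using optimum-graphcore.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```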
## Training and evaluation data
Trained and evaluated on the SQuAD dataset:
- [HuggingFace/squad ](https://huggingface.co/datasets/squad).
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/question-answering/run_qa.py \
--ipu_config_name Graphcore/roberta-base-ipu \
--model_name_or_path roberta-base \
--dataset_name squad \
--do_train \
--do_eval \
--num_train_epochs 2 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 2 \
--pod_type pod16 \
--learning_rate 6e-5 \
--max_seq_length 384 \
--doc_stride 128 \
--seed 1984 \
--lr_scheduler_type linear \
--loss_scaling 64 \
--weight_decay 0.01 \
--warmup_ratio 0.25 \
--logging_steps 1 \
--save_steps -1 \
--dataloader_num_workers 64 \
--output_dir squad_roberta_base \
--overwrite_output_dir \
--push_to_hub
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 1984
- distributed_type: IPU
- total_train_batch_size: 256
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 2.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
epoch = 2.0
train_loss = 1.2528
train_runtime = 0:02:14.50
train_samples = 88568
train_samples_per_second = 1316.952
train_steps_per_second = 5.13
***** eval metrics *****
epoch = 2.0
eval_exact_match = 85.2696
eval_f1 = 91.7455
eval_samples = 10790
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
agdsga/bert-base-chinese-finetuned-ner | f95e5267f291c474e9fcb4ec5f1684a008676246 | 2022-03-24T12:52:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | agdsga | null | agdsga/bert-base-chinese-finetuned-ner | 15 | null | transformers | 9,605 | Entry not found |
l3cube-pune/hing-gpt-devanagari | 9b164b142a274ec0608329ac134aa4b9f267bdf3 | 2022-06-26T15:11:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"hi",
"en",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"transformers",
"codemix",
"license:cc-by-4.0"
]
| text-generation | false | l3cube-pune | null | l3cube-pune/hing-gpt-devanagari | 15 | null | transformers | 9,606 | ---
license: cc-by-4.0
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingGPT-Devanagari
HingGPT-Devanagari is a Hindi-English code-mixed GPT model trained on Devanagari text. It is a GPT2 model trained on L3Cube-HingCorpus.
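A minimal generation sketch with 🤗 Transformers (the Devanagari prompt is illustrative and not taken from the paper; decoding settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("l3cube-pune/hing-gpt-devanagari")
model = AutoModelForCausalLM.from_pretrained("l3cube-pune/hing-gpt-devanagari")

# Illustrative code-mixed Devanagari prompt
inputs = tokenizer("मैं आज बहुत", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```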
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
```
@InProceedings{nayak-joshi:2022:WILDRE6,
author = {Nayak, Ravindra and Joshi, Raviraj},
title = {L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7--12}
}
``` |
danhsf/distilbert-base-uncased-finetuned-emotion | 2f993e57a11f857b435572fab823c0f80ae0c82a | 2022-03-31T02:39:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | danhsf | null | danhsf/distilbert-base-uncased-finetuned-emotion | 15 | null | transformers | 9,607 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.926557813198531
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9265
- F1: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8631 | 1.0 | 250 | 0.3221 | 0.904 | 0.9011 |
| 0.254 | 2.0 | 500 | 0.2201 | 0.9265 | 0.9266 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
KeithHorgan/TweetClimateAnalysis | e2ed0bc3ab4cd9462fc6e34ff7334a4638f537f5 | 2022-03-29T10:01:24.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:KeithHorgan98/autotrain-data-TweetClimateAnalysis",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | KeithHorgan | null | KeithHorgan/TweetClimateAnalysis | 15 | null | transformers | 9,608 | ---
tags: autotrain
language: unk
widget:
- text: "Climate Change is a hoax"
- text: "It is freezing, where is global warming"
datasets:
- KeithHorgan98/autotrain-data-TweetClimateAnalysis
co2_eq_emissions: 133.19491276284793
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 678720226
- CO2 Emissions (in grams): 133.19491276284793
## Validation Metrics
- Loss: 0.4864234924316406
- Accuracy: 0.865424430641822
- Macro F1: 0.7665472174344069
- Micro F1: 0.8654244306418221
- Weighted F1: 0.8586375445115083
- Macro Precision: 0.8281449061702826
- Micro Precision: 0.865424430641822
- Weighted Precision: 0.8619727477790186
- Macro Recall: 0.736576343905098
- Micro Recall: 0.865424430641822
- Weighted Recall: 0.865424430641822
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KeithHorgan98/autotrain-TweetClimateAnalysis-678720226
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
asafaya/hubert-xlarge-turkish | 81076860defe06bb56875f07cfcea720f2f1c320 | 2022-03-29T13:07:20.000Z | [
"pytorch",
"hubert",
"feature-extraction",
"transformers",
"license:cc-by-nc-4.0"
]
| feature-extraction | false | asafaya | null | asafaya/hubert-xlarge-turkish | 15 | null | transformers | 9,609 | ---
license: cc-by-nc-4.0
---
|
Suyogyart/nepali-16-newsgroups-classification | acf4e369beed85101a8785869fff6c0ad04fb2b2 | 2022-03-31T15:28:37.000Z | [
"pytorch",
"distilbert",
"text-classification",
"ne",
"transformers",
"multiclass-classification",
"newsgroup",
"nepali",
"license:apache-2.0"
]
| text-classification | false | Suyogyart | null | Suyogyart/nepali-16-newsgroups-classification | 15 | null | transformers | 9,610 | ---
license: apache-2.0
language: ne
tags:
- multiclass-classification
- newsgroup
- nepali
---
# Nepali 16 News Group Classification
This model classifies Nepali-language news articles into 16 different categories. It is fine-tuned from a pretrained DistilBERT model with a sequence classification head on a 16-newsgroup dataset for Nepali.
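A minimal inference sketch with the 🤗 pipeline API (the short Nepali headline is illustrative; because the dataset is label-encoded, the pipeline may return numeric `LABEL_*` ids rather than category names unless the config maps them):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Suyogyart/nepali-16-newsgroups-classification",
)
# Short illustrative headline; longer articles should be truncated to 512 tokens
print(classifier("नेपालले फुटबलमा ऐतिहासिक जित हासिल गर्‍यो"))
```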
## Acknowledgements
### Pretrained DistilBERT model
This model is fine-tuned on a text classification problem using a [pretrained model](https://huggingface.co/Sakonii/distilbert-base-nepali) available at HuggingFace.
## Dataset
This dataset consists of news documents in the Nepali language, evenly divided into 16 categories. It is primarily designed for multiclass text classification tasks. Each category consists of 1000 news articles scraped from online Nepali news portals such as Ekantipur, Nagarik, Gorkhapatra, Online Khabar and many more. In addition to the article text, the dataset also contains the news heading, the source from which the news was taken, and a brief summary of what the news is about. However, summaries are only available for news from certain sources.
The dataset is label-encoded, i.e. it contains a 'labels' column holding the numerical representation of each news category.
## Model Fine-tuning
Fine-tuning was carried out in Google Colab on a Tesla T4 GPU using Hugging Face's Trainer API. The model was trained for 4 epochs (approx. 42 minutes) until it reached the validation accuracy threshold.
**Dataset Splits**
| Split | Size | No. of samples |
|------------|------|----------------|
| train | 0.7 | 11200 |
| validation | 0.15 | 2400 |
| test | 0.15 | 2400 |
**DistilBERT Tokenizer parameters**
```
padding = True
truncation = True
max_len = 512
```
**Model Trainer arguments (For Trainer API)**
```
epochs = 5
batch_size = 16
learning_rate = 5e-05
save_steps = 500
eval_steps = 500
```
## Training Results
| Step | Training Loss | Validation Loss | Accuracy | Balanced Accuracy | Precision | Recall | F1 |
|------|---------------|-----------------|----------|-------------------|-----------|----------|----------|
| 500 | 0.718600 | 0.407946 | 0.878750 | 0.878750 | 0.882715 | 0.878750 | 0.877678 |
| 1000 | 0.252300 | 0.372410 | 0.897083 | 0.897083 | 0.903329 | 0.897083 | 0.897369 |
| 1500 | 0.175000 | 0.323519 | 0.916250 | 0.916250 | 0.917955 | 0.916250 | 0.916297 |
| 2000 | 0.099400 | 0.339903 | 0.916667 | 0.916667 | 0.919054 | 0.916667 | 0.916141 |
| 2500 | 0.058900 | 0.354112 | 0.921250 | 0.921250 | 0.922036 | 0.921250 | 0.920899 |
| 3000 | 0.023300 | 0.360163 | 0.922500 | 0.922500 | 0.922767 | 0.922500 | 0.922219 |
**Validation Loss:** 0.3235
**Validation Accuracy:** 92.625%
## Testing Results
| category | precision | recall | f1-score | support |
|---------------|-----------|--------|----------|---------|
| Arts | 0.94 | 0.97 | 0.95 | 150 |
| Diaspora | 0.97 | 0.93 | 0.95 | 150 |
| Bank | 0.97 | 0.86 | 0.91 | 150 |
| Technology | 0.98 | 0.99 | 0.99 | 150 |
| Literature | 0.92 | 0.88 | 0.90 | 150 |
| Automobile | 0.93 | 0.97 | 0.95 | 150 |
| World | 0.90 | 0.93 | 0.92 | 150 |
| Market | 0.93 | 0.98 | 0.95 | 150 |
| Lifestyle | 0.99 | 0.96 | 0.97 | 150 |
| Sports | 0.90 | 0.86 | 0.88 | 150 |
| Health | 0.86 | 0.89 | 0.87 | 150 |
| Entertainment | 0.98 | 0.97 | 0.97 | 150 |
| Politics | 0.97 | 0.99 | 0.98 | 150 |
| Tourism | 0.82 | 0.96 | 0.88 | 150 |
| Crime | 0.97 | 0.96 | 0.97 | 150 |
| Education | 0.96 | 0.84 | 0.90 | 150 |
| | | | | |
| accuracy | | | 0.93 | 2400 |
| macro avg | 0.94 | 0.93 | 0.93 | 2400 |
| weighted avg | 0.94 | 0.93 | 0.93 | 2400 |
## Sample Predictions
### Sample Text (Sports)
```
काठमाडौँ — त्रिभुवन आर्मी क्लबले ६ स्वर्ण, २ रजत र ६ कांस्य पदक जित्दै प्रथम वीर गणेशमान सिंह राष्ट्रिय फेन्सिङ प्रतियोगितामा टिम च्याम्पियन ट्रफी जितेको छ ।
दोस्रो भएको एपीएफले ३ स्वर्ण, ५ रजत र ८ कांस्य जित्यो । वाग्मती प्रदेशले ३ स्वर्ण, ५ रजत र ३ कांस्य जित्दै तेस्रो स्थान हात पार्यो ।
वीर गणेशमान सिंह स्पोर्ट्स कमिटी र नेपाल फेन्सिङ संघको संयुक्त आयोजनामा भएको प्रतियोगिताको महिला फोइलतर्फ एपीएफकी मन्दिरा थापाले स्वर्ण जितिन् । उनले फाइनलमा चिरप्रतिद्वन्द्वी सेनाकी रमा सिंहलाई १५–१२ ले हराइन् । आर्मीकी मनीषा राई र वाग्मतीकी अञ्जु तामाङ तेस्रो भए ।
पुरुषको टिम फोइलतर्फ आर्मीले स्वर्ण जित्यो । आर्मीले वाग्मतीलाई ४५–२९ स्कोरले हरायो । गण्डकी र एपीएफले कांस्य जिते ।
टिम महिला सावरमा आर्मीले स्वर्ण जित्यो । फाइनलमा आर्मीले एपीएफलाई ४५–३६ स्कोरले हराएर स्वर्ण जितेको हो । वाग्मती र गण्डकीले कांस्य जिते ।
महिला टिम फोइलतर्फ एपीएफले वाग्मती प्रदेशलाई ४५–३६ अंकले हरायो । आर्मी र प्रदेश १ तेस्रो भए ।
पुरुष इपी टिमतर्फ आर्मीले एपीएफलाई ४५–४० अंकले पराजित गर्दै स्वर्ण हात जित्यो ।
```
**Predicted Outputs**
```
***** Running Prediction *****
Num examples = 1
Batch size = 8
Predicted Category: Sports
```
### Sample Text (RU-UKR issue)
```
रूसी आक्रमणका कारण शरणार्थी जीवन बिताउन बाध्य युक्रेनीलाई
छिमेकी देशहरुले खाने, बस्नेलगायतका आधारभूत आवश्यकता उपलब्ध गराइरहेका छन्
जेनेभा — युक्रेनमा रुसले आक्रमण सुरु गरेयता २० लाख सर्वसाधारणले देश छाडेका छन् । शरणार्थीसम्बन्धी संयुक्त राष्ट्रसंघीय निकायका अुनसार विस्थापितहरू पोल्यान्ड, हंगेरी, स्लोभाकिया, मोल्दोभा, रोमानिया पुगेका छन् ।
कम्तीमा १२ लाख ४० हजार जना छिमेकी देश पोल्यान्ड पुगेको जनाइएको छ ।
त्यसैगरी, १ लाख ९१ हजार जना हंगेरी पुगेका छन् । १ लाख ४१ हजार स्लोभाकिया, ८३ हजार मोल्दोभा र ८२ हजार रोमानिया पुगेका छन् ।
त्यस्तै, रुस जानेको संख्या ९९ हजार ३ सय पुगेको छ ।
```
**Predicted Outputs**
```
***** Running Prediction *****
Num examples = 1
Batch size = 8
Predicted Category: World
``` |
alina1997/en_de_translation | 5dc8f4117e6f6f13f6d46d528cf7ff286cb72d5a | 2022-04-03T09:54:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alina1997 | null | alina1997/en_de_translation | 15 | null | transformers | 9,611 | Entry not found |
hackathon-pln-es/readability-es-paragraphs | 87e56e2f678e8d2aaec9050dec9843a53c8fa168 | 2022-04-04T10:41:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers",
"spanish",
"bertin",
"license:cc-by-4.0"
]
| text-classification | false | hackathon-pln-es | null | hackathon-pln-es/readability-es-paragraphs | 15 | null | transformers | 9,612 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
- bertin
pipeline_tag: text-classification
widget:
- text: La cueva de Zaratustra en el Pretil de los Consejos. Rimeros de libros hacen escombro y cubren las paredes. Empapelan los cuatro vidrios de una puerta cuatro cromos espeluznantes de un novelón por entregas. En la cueva hacen tertulia el gato, el can, el loro y el librero. Zaratustra, abichado y giboso -la cara de tocino rancio y la bufanda de verde serpiente- promueve con su caracterización de fantoche, una aguda y dolorosa disonancia muy emotiva y muy moderna. Encogido en el roto pelote de su silla enana, con los pies entrapados y cepones en la tarima del brasero, guarda la tienda. Un ratón saca el hocico intrigante por un agujero.
---
# Readability ES Paragraphs for two classes
Model based on the Roberta architecture finetuned on [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for readability assessment of Spanish texts.
## Description and performance
This version of the model was trained on a mix of datasets, using paragraph-level granularity when possible. The model performs binary classification among the following classes:
- Simple.
- Complex.
It achieves a macro-averaged F1 score of 0.8891, measured on the validation set.
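A minimal sketch of calling the classifier directly (the short sentence is the opening of the widget example above; label names and their order come from the model config, which this sketch does not assume):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hackathon-pln-es/readability-es-paragraphs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "La cueva de Zaratustra en el Pretil de los Consejos."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# One probability per class (simple vs. complex)
for label_id, p in enumerate(probs):
    print(model.config.id2label[label_id], float(p))
```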
## Model variants
- [`readability-es-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-sentences). Two classes, sentence-based dataset.
- `readability-es-paragraphs` (this model). Two classes, paragraph-based dataset.
- [`readability-es-3class-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-3class-sentences). Three classes, sentence-based dataset.
- [`readability-es-3class-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-3class-paragraphs). Three classes, paragraph-based dataset.
## Datasets
- [`readability-es-hackathon-pln-public`](https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public), composed of:
* coh-metrix-esp corpus.
* Various text resources scraped from websites.
- Other non-public datasets: newsela-es, simplext.
## Training details
Please, refer to [this training run](https://wandb.ai/readability-es/readability-es/runs/2z8080pi/overview) for full details on hyperparameters and training regime.
## Biases and Limitations
- Due to the scarcity of data and the lack of a reliable gold test set, performance metrics are reported on the validation set.
- One of the datasets involved is the Spanish version of newsela, which is frequently used as a reference. However, it was created by translating previous datasets, and therefore it may contain somewhat unnatural phrases.
- Some of the datasets used cannot be publicly disseminated, making it more difficult to assess the existence of biases or mistakes.
- Language might be biased towards the Spanish dialect spoken in Spain. Other regional variants might be sub-represented.
- No effort has been performed to alleviate the shortcomings and biases described in the [original implementation of BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish#bias-examples-spanish).
## Authors
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
|
emon1521/wav2vec2-try | 9802952bf93a9ee41995c4a31909c13752e2f2c3 | 2022-04-08T05:21:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | emon1521 | null | emon1521/wav2vec2-try | 15 | null | transformers | 9,613 | Entry not found |
lysandre/test-bert-sharded | b7b840edf060f398daed50ee4285d42ca935b44d | 2022-04-07T17:15:21.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | lysandre | null | lysandre/test-bert-sharded | 15 | null | transformers | 9,614 | Entry not found |
rycont/KoQuestionBART | 348afe0407e1a11ad537646775b26fb4d154dbfa | 2022-04-09T11:24:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ko",
"dataset:KorQuad 1.0",
"transformers",
"KoBART",
"BART",
"Korean",
"QG",
"Question",
"KorQuad",
"license:gpl",
"autotrain_compatible"
]
| text2text-generation | false | rycont | null | rycont/KoQuestionBART | 15 | 1 | transformers | 9,615 | ---
language:
- ko
tags:
- KoBART
- BART
- Korean
- QG
- Question
- KorQuad
license: gpl
datasets:
- KorQuad 1.0
widget:
- text: "키워드 추출: 5<unused1>1943년 10월 당시, 반응로 B는 초기 가동에서 250 MW의 전력을 생산하도록 설계되었다. 맨해튼 계획은 반응로에 A에서 F까지 일련번호를 부여하였다. 이 반응로들은 모두 한 장소에 지어졌다. 반응로의 건설에는 390 톤의 강철이 소요되었으며, 13,300 m에 달하는 5만개의 콘크리트 벽돌을 사용하여 높이 37m에 달하는 건물을 건축하였다. 반응로는 1944년 2월에 착공되었다. 1944년 9월 13일 캄프턴, 마티어스, 듀퐁사의 크라우포드 그린월트와 레오나 우즈, 그리고 엔리코 페르미가 지켜보는 가운데 반응로가 가동되었다. 반응로의 연료는 페르미가 직접 집어넣었다. 가동 초기 반응로는 조정간과 냉각수 등에 문제가 있어 가동과 정지를 반복하였다."
example_title: "키워드 추출 Keyword Extraction"
- text: "질문 생성: 거족적인 저항<unused0>임진왜란은 1592년부터 1598년까지 2차에 걸쳐서 우리나라에 침입한 일본과의 싸움이다. 엄청난 시련을 겪으면서도 끈질긴 저항으로 이겨내고 각성과 자기성찰을 바탕으로 민족의 운명을 새로 개척해나간 계기가 된 전쟁이다. 명의 원조도 있었지만 승리의 가장 큰 원동력은 거족적인 저항으로, 이순신에 의한 제해권의 장악과 전국에서 봉기한 의병의 활동은 불리했던 전쟁 국면을 전환시킨 결정적인 힘이었다. 이 전란은 동아시아의 국제 정세를 크게 변화시키는 결과를 가져와, 명과 청이 교체되면서 병자호란이라는 시련을 예고하기도 했다. 조선이 임진왜란을 당하여 전쟁 초기 이를 감당하기 어려울 정도로 국력이 쇠약해진 것은 왜란이 일어난 선조대에 이르러서 비롯된 것은 아니었다. 이미 훨씬 이전부터 중쇠의 기운이 나타나기 시작하였다.정치적으로는 연산군 이후 명종대에 이르는 4대 사화와 훈구·사림 세력간에 계속된 정쟁으로 인한 중앙 정계의 혼란, 사림 세력이 득세한 선조 즉위 이후 격화된 당쟁 등으로 정치의 정상적인 운영을 수행하기 어려운 지경이었다.군사적으로도 조선 초기에 설치된 국방체제가 붕괴되어 외침에 대비하기 위한 방책으로 군국기무를 장악하는 비변사라는 합의 기관을 설치했으나, 이것 또한 정상적인 기능을 발휘하지 못하였다.이이는 남왜북호의 침입에 대처하기 위하여 십만양병설을 주장하기도 하였다. 그러나 국가 재정의 허약으로 뜻을 이루지 못하고, 사회는 점점 해이해지고 문약에 빠져 근본적인 국가 방책이 확립되지 못한 실정이었다.이러할 즈음 일본에서는 새로운 형세가 전개되고 있었다. 즉, 15세기 후반 서세동점에 따라 일본에는 유럽 상인들이 들어와 신흥 상업 도시가 발전되어 종래의 봉건적인 지배 형태가 위협받기 시작하였다."
example_title: "질문 생성 Question Generation"
---
# Question-Generation Multitasking with KoBART
Based on [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2). You can see the notebook on [Kaggle](https://www.kaggle.com/rycont/koquestionbart)
This model is trained in a multi-task fashion on the following tasks in order to generate meaningful questions from Korean paragraphs:
- Extracting keywords from a paragraph that can serve as answers
- Generating a question sentence whose answer is a given keyword
## Usage
### Keyword extraction
**Input**
> [number of keywords]\<unused1>[paragraph]
**Output**
> [keyword 1]\<unused2>[keyword 2]\<unused2>[keyword n...
### Question generation
**Input**
> [answer]\<unused0>[paragraph]
**Output**
> [question sentence] |
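A minimal sketch of feeding the documented question-generation format to the model with 🤗 Transformers (the answer/paragraph strings are shortened from the widget example above; the decoding settings are assumptions, not taken from this card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("rycont/KoQuestionBART")
model = AutoModelForSeq2SeqLM.from_pretrained("rycont/KoQuestionBART")

# Question generation format: "[answer]<unused0>[paragraph]"
answer = "거족적인 저항"
paragraph = "임진왜란은 1592년부터 1598년까지 2차에 걸쳐서 우리나라에 침입한 일본과의 싸움이다."
inputs = tokenizer(answer + "<unused0>" + paragraph, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```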
sepidmnorozy/parsbert-finetuned-pos | ada6e7b6ab2d82a2d80586eaf85f510a4dcdee54 | 2022-04-12T09:57:27.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:udpos28",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | sepidmnorozy | null | sepidmnorozy/parsbert-finetuned-pos | 15 | null | transformers | 9,616 | ---
tags:
- generated_from_trainer
datasets:
- udpos28
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: parsbert-finetuned-pos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: udpos28
type: udpos28
args: fa
metrics:
- name: Precision
type: precision
value: 0.9447937270415372
- name: Recall
type: recall
value: 0.9486470191864382
- name: F1
type: f1
value: 0.9467164522465448
- name: Accuracy
type: accuracy
value: 0.9598951738759165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# parsbert-finetuned-pos
This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on the udpos28 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1385
- Precision: 0.9448
- Recall: 0.9486
- F1: 0.9467
- Accuracy: 0.9599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.122 | 1.0 | 3103 | 0.1215 | 0.9363 | 0.9424 | 0.9394 | 0.9561 |
| 0.0735 | 2.0 | 6206 | 0.1297 | 0.9413 | 0.9474 | 0.9443 | 0.9582 |
| 0.0373 | 3.0 | 9309 | 0.1385 | 0.9448 | 0.9486 | 0.9467 | 0.9599 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
SiriusRen/OH_my-rubbish-model | 6e462efdab835cb381676e57fd468c5f763f7c7e | 2022-04-14T09:35:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SiriusRen | null | SiriusRen/OH_my-rubbish-model | 15 | null | transformers | 9,617 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: OH_my-rubbish-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OH_my-rubbish-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 2.0.1.dev0
- Tokenizers 0.11.6
|
ccdv/lsg-distilcamembert-base-4096 | 7a01dbaaca7a958262d9a99ff6912a04f8b7deb3 | 2022-07-25T05:35:56.000Z | [
"pytorch",
"camembert",
"fill-mask",
"fr",
"transformers",
"long context",
"autotrain_compatible"
]
| fill-mask | false | ccdv | null | ccdv/lsg-distilcamembert-base-4096 | 15 | 1 | transformers | 9,618 | ---
language: fr
tags:
- long context
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is a small version of the [distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences and is faster and more efficient than Longformer or BigBird (from Transformers), relying on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad to a multiple of the block size (pad_to_multiple_of=...). \
The encoder-decoder setting is supported, but it has not been tested extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file; you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-distilcamembert-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilcamembert-base-4096")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-distilcamembert-base-4096",
    trust_remote_code=True,
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* sparsity_type="block_stride", use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
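For example, switching the model above to LSH-based sparse selection could look like this (a sketch following the same pattern as the Parameters section above):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("ccdv/lsg-distilcamembert-base-4096",
    trust_remote_code=True,
    sparsity_type="lsh",  # cluster similar tokens with LSH
    sparsity_factor=4,    # larger factors are recommended for "lsh"
    block_size=128,
    sparse_block_size=128
)
```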
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-distilcamembert-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilcamembert-base-4096")
SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilcamembert-base-4096",
    trust_remote_code=True,
    pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilcamembert-base-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
    SENTENCE,
    return_tensors="pt",
    #pad_to_multiple_of=... # Optional
    truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilcamembert-base-4096",
    trust_remote_code=True,
    pool_with_global=True, # pool with a global token instead of first token
    num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilcamembert-base-4096")
for name, param in model.named_parameters():
    if "global_embeddings" not in name:
        param.requires_grad = False
    else:
        param.requires_grad = True
``` |
ysharma/bert-finetuned-ner | edcb7f843376ad26576ca391d021d49fdc4eba30 | 2022-04-18T15:06:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ysharma | null | ysharma/bert-finetuned-ner | 15 | null | transformers | 9,619 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9327495042961005
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9413039853259965
- name: Accuracy
type: accuracy
value: 0.9860775887443339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0634
- Precision: 0.9327
- Recall: 0.9500
- F1: 0.9413
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0876 | 1.0 | 1756 | 0.0692 | 0.9127 | 0.9355 | 0.9240 | 0.9819 |
| 0.0316 | 2.0 | 3512 | 0.0651 | 0.9284 | 0.9490 | 0.9386 | 0.9850 |
| 0.0215 | 3.0 | 5268 | 0.0634 | 0.9327 | 0.9500 | 0.9413 | 0.9861 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
uw-madison/nystromformer-2048 | 23032017be28060be224adb388c366a2340f122f | 2022-04-18T16:27:47.000Z | [
"pytorch",
"nystromformer",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | uw-madison | null | uw-madison/nystromformer-2048 | 15 | null | transformers | 9,620 | Nystromformer for sequence length 2048 trained on WikiText-103 v1. |
SeNSiTivE/Learning-sentiment-analysis-through-imdb-ds | fbd65af213867e8899b230945e3b3542273e769d | 2022-04-19T11:21:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SeNSiTivE | null | SeNSiTivE/Learning-sentiment-analysis-through-imdb-ds | 15 | null | transformers | 9,621 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: Learning-sentiment-analysis-through-imdb-ds
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8817891373801918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Learning-sentiment-analysis-through-imdb-ds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3419
- Accuracy: 0.8767
- F1: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Intel/distilbert-base-uncased-finetuned-conll03-english-int8-static | 127e325bdd4352fc9c87c525e5ae8d8d7544e39d | 2022-06-10T02:40:14.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:conll2003",
"transformers",
"token-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Intel | null | Intel/distilbert-base-uncased-finetuned-conll03-english-int8-static | 15 | null | transformers | 9,622 | ---
language:
- en
license: apache-2.0
tags:
- token-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- conll2003
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-conll03-english-int8-static
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: Conll2003
type: conll2003
metrics:
- name: Accuracy
type: accuracy
value: 0.9858650364082395
---
# INT8 distilbert-base-uncased-finetuned-conll03-english
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [elastic/distilbert-base-uncased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-uncased-finetuned-conll03-english).
The calibration dataloader is the train dataloader. The default calibration sampling size 100 isn't divisible exactly by batch size 8, so the real sampling size is 104.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-accuracy)** |0.9859|0.9882|
| **Model size (MB)** |64.5|253|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
    'Intel/distilbert-base-uncased-finetuned-conll03-english-int8-static',
)
```
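A minimal inference sketch, assuming the quantized model exposes the same forward interface as the original fp32 checkpoint (the tokenizer is loaded from the fp32 repository named above; the example sentence is a placeholder):
```python
import torch
from transformers import AutoTokenizer
from neural_compressor.utils.load_huggingface import OptimizedModel

tokenizer = AutoTokenizer.from_pretrained("elastic/distilbert-base-uncased-finetuned-conll03-english")
int8_model = OptimizedModel.from_pretrained(
    'Intel/distilbert-base-uncased-finetuned-conll03-english-int8-static',
)

inputs = tokenizer("My name is Clara and I live in Berkeley.", return_tensors="pt")
with torch.no_grad():
    logits = int8_model(**inputs).logits
# One predicted label id per token; the id2label mapping lives in the model config
print(logits.argmax(dim=-1))
```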
|
Intel/bart-large-mrpc-int8-dynamic | cc68488634343a5b4da4a67797065263595f7498 | 2022-06-10T02:41:57.000Z | [
"pytorch",
"bart",
"text-classification",
"en",
"dataset:glue",
"transformers",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingDynamic",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Intel | null | Intel/bart-large-mrpc-int8-dynamic | 15 | null | transformers | 9,623 | ---
language:
- en
license: apache-2.0
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingDynamic
datasets:
- glue
metrics:
- f1
model-index:
- name: bart-large-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.9050847457627118
---
# INT8 bart-large-mrpc
### Post-training dynamic quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [bart-large-mrpc](https://huggingface.co/Intel/bart-large-mrpc).
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9051|0.9120|
| **Model size (MB)** |547|1556.48|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
    'Intel/bart-large-mrpc-int8-dynamic',
)
```
|
clapika2010/soccer_predictions | 4e6bdeb3c7f647d261d4f60d61370fb9ebc2b6ea | 2022-04-22T19:31:02.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | clapika2010 | null | clapika2010/soccer_predictions | 15 | null | transformers | 9,624 | Entry not found |
praptishadmaan/finetuning-sentiment-model-3000-samples | 07c0b819bd9a98809c74018ed92e64c9287f4a36 | 2022-04-24T17:16:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | praptishadmaan | null | praptishadmaan/finetuning-sentiment-model-3000-samples | 15 | null | transformers | 9,625 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93192
- name: F1
type: f1
value: 0.9323583180987203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2345
- Accuracy: 0.9319
- F1: 0.9324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Pavithra/autopilot-madgrad2_54 | 9160b593bf7480bbf6de2e0affc43c8b71b64942 | 2022-04-24T05:23:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Pavithra | null | Pavithra/autopilot-madgrad2_54 | 15 | null | transformers | 9,626 | Entry not found |
Akarsh3053/potter-chat-bot | 091f2fee869920fc84837a886af416976dace321 | 2022-04-24T06:55:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"chatBot"
]
| conversational | false | Akarsh3053 | null | Akarsh3053/potter-chat-bot | 15 | null | transformers | 9,627 | ---
tags:
- conversational
- chatBot
---
# Harry Potter DialoGPT Model |
jsoutherland/distilbert-base-uncased-finetuned-emotion | 6db4a918b0670c76ff231aefb23512ca2bc9893e | 2022-07-15T13:48:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jsoutherland | null | jsoutherland/distilbert-base-uncased-finetuned-emotion | 15 | null | transformers | 9,628 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model_index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metric:
name: F1
type: f1
value: 0.9327347950817506
model-index:
- name: jsoutherland/distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.925
verified: true
- name: Precision Macro
type: precision
value: 0.8954208010579672
verified: true
- name: Precision Micro
type: precision
value: 0.925
verified: true
- name: Precision Weighted
type: precision
value: 0.9256567173431012
verified: true
- name: Recall Macro
type: recall
value: 0.8711059962680445
verified: true
- name: Recall Micro
type: recall
value: 0.925
verified: true
- name: Recall Weighted
type: recall
value: 0.925
verified: true
- name: F1 Macro
type: f1
value: 0.8794773714607985
verified: true
- name: F1 Micro
type: f1
value: 0.925
verified: true
- name: F1 Weighted
type: f1
value: 0.9244781949774824
verified: true
- name: loss
type: loss
value: 0.17752596735954285
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- Accuracy: 0.9325
- F1: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.2838 | 0.9065 | 0.9036 |
| No log | 2.0 | 500 | 0.1795 | 0.9255 | 0.9255 |
| No log | 3.0 | 750 | 0.1649 | 0.9325 | 0.9327 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 2.1.0
- Tokenizers 0.10.3
|
manueltonneau/bert-twitter-en-lost-job | 5770560a055e342e429a09d2443a057c05597bdb | 2022-04-26T15:58:32.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2203.09178",
"transformers"
]
| text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-en-lost-job | 15 | null | transformers | 9,629 | ---
language: en
widget:
- text: "Just lost my job..."
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Lost Job (1), else (0)
- country: US
- language: English
- architecture: BERT base
## Model description
This model is a version of `DeepPavlov/bert-base-cased-conversational` finetuned to recognize English tweets where a user mentions that she lost her job in the past month. It was trained on English tweets from US-based users. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user recently lost her job (label=1)
- the negative class referring to all other tweets (label=0)
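A minimal inference sketch with the 🤗 pipeline API (the input text is the widget example from this card; how the two labels are named in the output depends on the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="manueltonneau/bert-twitter-en-lost-job")
print(classifier("Just lost my job..."))
# e.g. [{'label': ..., 'score': ...}], where label 1 is the positive "lost job" class
```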
## Resources
The dataset of English tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
cogint/in-boxbart | 4c6812f8d97ab0b4dfeff5b21655cfbe710d0298 | 2022-04-27T00:22:39.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:2204.07600",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | cogint | null | cogint/in-boxbart | 15 | null | transformers | 9,630 | ---
license: mit
---
In-BoXBART
=============
An instruction-based unified model for performing various biomedical tasks.
You may want to check out
* Our paper (NAACL 2022 Findings): [In-BoXBART: Get Instructions into Biomedical Multi-Task Learning](https://arxiv.org/abs/2204.07600)
* GitHub: [Click Here](https://github.com/Mihir3009/In-BoXBART)
This work explores the impact of instructional prompts on biomedical Multi-Task Learning. We introduce the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X) various categories. Using this meta-dataset, we propose a unified model termed In-BoXBART that can jointly learn all tasks of the BoX without any task-specific modules. To the best of our knowledge, this is the first attempt to propose a unified model in the biomedical domain and to use instructions to achieve generalization across several biomedical tasks.
How to Use
=============
You can very easily load the models with Transformers, instead of downloading them manually. The BART-base model is the backbone of our model. Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("cogint/in-boxbart")
model = AutoModelForSeq2SeqLM.from_pretrained("cogint/in-boxbart")
```
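A minimal generation sketch (the instruction-style prompt below is purely illustrative; the exact instruction templates for each BoX task are defined in the paper and the GitHub repository, not here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("cogint/in-boxbart")
model = AutoModelForSeq2SeqLM.from_pretrained("cogint/in-boxbart")

# Placeholder instruction + input; real prompts should follow the BoX task templates
prompt = ("Definition: Identify the disease mentioned in the given sentence. "
          "Input: The patient was diagnosed with type 2 diabetes.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```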
Or just clone the model repo
```
git lfs install
git clone https://huggingface.co/cogint/in-boxbart
```
BibTeX Entry and Citation Info
===============
If you are using our model, please cite our paper:
```bibtex
@article{parmar2022boxbart,
title={{In-BoXBART: Get Instructions into Biomedical Multi-Task Learning}},
author={Parmar, Mihir and Mishra, Swaroop and Purohit, Mirali and Luo, Man and Murad, M Hassan and Baral, Chitta},
journal={NAACL 2022 Findings},
year={2022}
}
``` |
it5/it5-efficient-small-el32-question-answering | 798064db0afbf348a21f66d999b7e3a31980d195 | 2022-04-29T14:28:58.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:squad_it",
"arxiv:2203.03759",
"transformers",
"Italian",
"efficient",
"sequence-to-sequence",
"squad_it",
"text2text-question-answering",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/it5-efficient-small-el32-question-answering | 15 | null | transformers | 9,631 | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- Italian
- efficient
- sequence-to-sequence
- squad_it
- text2text-question-answering
- text2text-generation
widget:
- text: "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"
- text: "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"
- text: "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"
- text: "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?"
metrics:
- f1
- exact-match
model-index:
- name: it5-efficient-small-el32-question-answering
results:
- task:
type: question-answering
name: "Question Answering"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: f1
value: 0.747
name: "Test F1"
- type: exact-match
value: 0.645
name: "Test Exact Match"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for Question Answering ⁉️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-answering")
```
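The loaded `tokenizer` and `model` can then be used for generation directly; a minimal sketch (the context sentence comes from the examples above, and the `max_length` value is an assumption):
```python
text = "Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Domanda: Cosa hanno permesso le fluttuazioni climatiche degli ultimi 34 milioni di anni?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```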
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
it5/it5-efficient-small-el32-wiki-summarization | 9bd8552c054a0ba59366d108bcaeb18dbaff7d68 | 2022-04-29T15:16:27.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:wits",
"arxiv:2203.03759",
"arxiv:2109.10686",
"transformers",
"italian",
"sequence-to-sequence",
"wikipedia",
"summarization",
"efficient",
"wits",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | it5 | null | it5/it5-efficient-small-el32-wiki-summarization | 15 | null | transformers | 9,632 | ---
language:
- it
license: apache-2.0
datasets:
- wits
tags:
- italian
- sequence-to-sequence
- wikipedia
- summarization
- efficient
- wits
widget:
- text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati."
- text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. 
Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. "
- text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. "
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-wiki-summarization
results:
- task:
type: wiki-summarization
name: "Wikipedia Summarization"
dataset:
type: wits
name: "WITS"
metrics:
- type: rouge1
value: 0.346
name: "Test Rouge1"
- type: rouge2
value: 0.196
name: "Test Rouge2"
- type: rougeL
value: 0.314
name: "Test RougeL"
- type: bertscore
value: 0.513
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
---
# IT5 Cased Small Efficient EL32 for Wikipedia Summarization 📑 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
hg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-wiki-summarization')
hg("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
)
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-wiki-summarization")
```
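The loaded `tokenizer` and `model` can then be used for summarization directly; a minimal sketch (the beam-search settings are assumptions):
```python
text = "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese."
inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```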
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
Yehor/wav2vec2-xls-r-1b-uk-with-binary-news-lm | f43d0b1be48b4e776365144f85daa0dd1ccb72a3 | 2022-07-30T07:00:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0"
]
| automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-1b-uk-with-binary-news-lm | 15 | null | transformers | 9,633 | ---
language:
- uk
license: cc-by-nc-sa-4.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- uk
xdatasets:
- mozilla-foundation/common_voice_7_0
---
# Ukrainian STT model (with a binary language model built from a news corpus)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
Attribution for the dataset used to build the language model:
- Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. https://lang.org.ua/uk/corpora/#anchor4
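## Usage
A minimal transcription sketch with greedy CTC decoding (the audio path and the resampling step are placeholders; decoding with the bundled n-gram language model additionally requires `pyctcdecode` and `Wav2Vec2ProcessorWithLM`):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Yehor/wav2vec2-xls-r-1b-uk-with-binary-news-lm"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio file and resample it to the 16 kHz rate expected by the model.
speech, sample_rate = torchaudio.load("audio.wav")  # placeholder path
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0).numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```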
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
dragonSwing/viwav2vec2-base-3k | 4b455411460e8e8492db7c39d832f778de5d1a58 | 2022-05-17T15:15:13.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"vi",
"arxiv:2006.11477",
"transformers",
"speech",
"automatic-speech-recognition",
"license:cc-by-sa-4.0"
]
| automatic-speech-recognition | false | dragonSwing | null | dragonSwing/viwav2vec2-base-3k | 15 | 0 | transformers | 9,634 | ---
license: cc-by-sa-4.0
language: vi
tags:
- speech
- automatic-speech-recognition
---
# Wav2Vec2 base model trained on 3K hours of Vietnamese speech
The base model is pre-trained on 16kHz sampled speech audio from a Vietnamese speech corpus containing 3K hours of spontaneous, read, and broadcast speech. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Vietnamese automatic speech recognition.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more in-detail explanation of how to fine-tune the model.
[Facebook's Wav2Vec2 blog](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
[Paper](https://arxiv.org/abs/2006.11477)
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for an example of how to fine-tune a pre-trained wav2vec2 model; the example uses English data, but the same procedure applies to this model.
```python
import torch
from transformers import Wav2Vec2Model
model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-3k")
# Sanity check
inputs = torch.rand([1, 16000])
outputs = model(inputs)
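# The frame-level representations can be used as features for downstream fine-tuning
# (e.g. CTC-based speech recognition); the shape is roughly (1, 49, 768) for one second of 16 kHz audio.
print(outputs.last_hidden_state.shape)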
``` |
datauma/bert-finetuned-ner | 9c2004bed7e925eb220c13118107eff7368f06cf | 2022-05-03T11:52:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | datauma | null | datauma/bert-finetuned-ner | 15 | null | transformers | 9,635 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9312510328871261
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9397148336529643
- name: Accuracy
type: accuracy
value: 0.9855624889621475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Precision: 0.9313
- Recall: 0.9483
- F1: 0.9397
- Accuracy: 0.9856
## Model description
More information needed
## Intended uses & limitations
The model is intended for English named-entity recognition with the CoNLL-2003 entity types (person, organization, location, miscellaneous). Other limitations have not been documented.
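A minimal inference sketch with the `token-classification` pipeline (the aggregation strategy is an assumption):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="datauma/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```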
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.084 | 1.0 | 1756 | 0.0652 | 0.9203 | 0.9387 | 0.9294 | 0.9842 |
| 0.0387 | 2.0 | 3512 | 0.0589 | 0.9271 | 0.9504 | 0.9386 | 0.9853 |
| 0.0203 | 3.0 | 5268 | 0.0630 | 0.9313 | 0.9483 | 0.9397 | 0.9856 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ghost1/bert-base-uncased-finetuned_for_sentiment_analysis1-sst2 | f4148265c1ef20d83bf9632823c388024350368d | 2022-05-05T17:34:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Ghost1 | null | Ghost1/bert-base-uncased-finetuned_for_sentiment_analysis1-sst2 | 15 | null | transformers | 9,636 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned_for_sentiment_analysis1-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8853211009174312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned_for_sentiment_analysis1-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4723
- Accuracy: 0.8853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 0.3697 | 0.8544 |
| No log | 2.0 | 126 | 0.2904 | 0.8956 |
| No log | 3.0 | 189 | 0.4000 | 0.8830 |
| No log | 4.0 | 252 | 0.4410 | 0.8911 |
| No log | 5.0 | 315 | 0.4723 | 0.8853 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
peter2000/distilbert-base-uncased-finetuned-osdg | d9f3548e8c5f4417784dd073d1f2f91b2a4586a4 | 2022-05-25T11:50:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | peter2000 | null | peter2000/distilbert-base-uncased-finetuned-osdg | 15 | null | transformers | 9,637 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-osdg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-osdg
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8193
- F1 Score: 0.7962
- Accuracy: 0.8434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3769 | 1.0 | 1017 | 0.8258 | 0.7729 | 0.8257 |
| 0.2759 | 2.0 | 2034 | 0.8364 | 0.7773 | 0.8262 |
| 0.1412 | 3.0 | 3051 | 1.0203 | 0.7833 | 0.8379 |
| 0.1423 | 4.0 | 4068 | 1.1603 | 0.7683 | 0.8224 |
| 0.0939 | 5.0 | 5085 | 1.3029 | 0.7843 | 0.8329 |
| 0.0757 | 6.0 | 6102 | 1.3562 | 0.7931 | 0.8379 |
| 0.0801 | 7.0 | 7119 | 1.2925 | 0.7840 | 0.8395 |
| 0.0311 | 8.0 | 8136 | 1.4632 | 0.7750 | 0.8318 |
| 0.0263 | 9.0 | 9153 | 1.5760 | 0.7843 | 0.8312 |
| 0.0196 | 10.0 | 10170 | 1.5689 | 0.7890 | 0.8417 |
| 0.0313 | 11.0 | 11187 | 1.6034 | 0.7909 | 0.8417 |
| 0.0007 | 12.0 | 12204 | 1.6725 | 0.7889 | 0.8406 |
| 0.0081 | 13.0 | 13221 | 1.6463 | 0.7911 | 0.8395 |
| 0.0061 | 14.0 | 14238 | 1.7730 | 0.7861 | 0.8345 |
| 0.003 | 15.0 | 15255 | 1.8001 | 0.7847 | 0.8379 |
| 0.0002 | 16.0 | 16272 | 1.7328 | 0.7912 | 0.8434 |
| 0.0 | 17.0 | 17289 | 1.7914 | 0.8011 | 0.8489 |
| 0.0009 | 18.0 | 18306 | 1.7772 | 0.7958 | 0.8456 |
| 0.0 | 19.0 | 19323 | 1.8028 | 0.7958 | 0.8434 |
| 0.0 | 20.0 | 20340 | 1.8193 | 0.7962 | 0.8434 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_42 | cc15ff085432d596f3c9f5eb36a2b07fdb0efb0b | 2022-05-10T23:32:05.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | CEBaB | null | CEBaB/gpt2.CEBaB.sa.2-class.exclusive.seed_42 | 15 | null | transformers | 9,638 | Entry not found |
Wanjiru/ag_based_ner | f50e41de2b9e0022d9d73b94fddd484431a79697 | 2022-05-11T11:41:53.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Wanjiru | null | Wanjiru/ag_based_ner | 15 | null | transformers | 9,639 | Fine tuned recobo/agriculture-bert-uncased for custom NER entities. |
enoriega/kw_pubmed_10000_0.0003 | a3573e2f928f690712c563199f612a57dcefb757 | 2022-05-12T14:21:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | enoriega | null | enoriega/kw_pubmed_10000_0.0003 | 15 | null | transformers | 9,640 | Entry not found |
tanviraumi/meeting-summary | 73ad6cf93848732492830fefac80756500fea724 | 2022-05-13T23:09:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | tanviraumi | null | tanviraumi/meeting-summary | 15 | null | transformers | 9,641 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: meeting-summary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meeting-summary
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.12.0.dev20220513+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
anwesham/imdb-sentiment-baseline-distilbert | ec16ec2953883c6700a9a91c7b59f01b22ffeb16 | 2022-05-14T03:58:39.000Z | [
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:anwesham/autotrain-data-imdb-sentiment-analysis",
"transformers"
]
| text-classification | false | anwesham | null | anwesham/imdb-sentiment-baseline-distilbert | 15 | null | transformers | 9,642 | ---
language: unk
datasets:
- anwesham/autotrain-data-imdb-sentiment-analysis
---
## Description
- Problem type: Binary Classification
## Validation Metrics
- Loss: 0.17481304705142975
- Accuracy: 0.936
- Precision: 0.9526578073089701
- Recall: 0.9176
- AUC: 0.9841454399999999
- F1: 0.93480032599837
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/anwesham/autotrain-imdb-sentiment-analysis-864927555
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927555", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927555", use_auth_token=True)
inputs = tokenizer("I love to eat good food and watch Moana.", return_tensors="pt")
outputs = model(**inputs)
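# Convert logits to a predicted label; the label names come from the model config.
predicted_class = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])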
``` |
kushaljoseph/bert-to-distilbert-NER | c65c2bd819d25fd216e571e216310caba3332785 | 2022-05-16T15:38:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kushaljoseph | null | kushaljoseph/bert-to-distilbert-NER | 15 | null | transformers | 9,643 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-to-distilbert-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-to-distilbert-NER
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.9063
- eval_precision: 0.0120
- eval_recall: 0.0069
- eval_f1: 0.0088
- eval_accuracy: 0.7600
- eval_runtime: 8.6309
- eval_samples_per_second: 376.671
- eval_steps_per_second: 3.012
- epoch: 1.0
- step: 110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00023888106906613202
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Tititun/consumer_category | 4b597908de6e07943cbfb1d889be8f81cc89ac5f | 2022-05-15T05:18:30.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | Tititun | null | Tititun/consumer_category | 15 | 1 | transformers | 9,644 | Entry not found |
ali-issa/3-wav2vec2-arabic-gpu-colab-similar-to-german-more-warm-2 | 9474e2229d2bea762f174a65b4bec22c2d10c700 | 2022-05-14T23:07:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | ali-issa | null | ali-issa/3-wav2vec2-arabic-gpu-colab-similar-to-german-more-warm-2 | 15 | null | transformers | 9,645 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-arabic-gpu-colab-similar-to-german-more-warm-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-arabic-gpu-colab-similar-to-german-more-warm-2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Wer: 0.4369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.6025 | 2.83 | 400 | 3.0062 | 1.0 |
| 2.9581 | 5.67 | 800 | 2.8622 | 1.0 |
| 2.1171 | 8.51 | 1200 | 0.9307 | 0.7763 |
| 0.9365 | 11.35 | 1600 | 0.6720 | 0.5796 |
| 0.6464 | 14.18 | 2000 | 0.6406 | 0.5156 |
| 0.4829 | 17.02 | 2400 | 0.5887 | 0.4709 |
| 0.3823 | 19.85 | 2800 | 0.5968 | 0.4504 |
| 0.3201 | 22.69 | 3200 | 0.5826 | 0.4386 |
| 0.2797 | 25.53 | 3600 | 0.6159 | 0.4439 |
| 0.2553 | 28.37 | 4000 | 0.6168 | 0.4369 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
miyagawaorj/distilbert-base-uncased-finetuned-emotion | fba706e0f7e485c29e7f256fdfd17f0bac6c4940 | 2022-06-06T11:44:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | miyagawaorj | null | miyagawaorj/distilbert-base-uncased-finetuned-emotion | 15 | null | transformers | 9,646 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9425
- name: F1
type: f1
value: 0.9422011075095515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9425
- F1: 0.9422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4656 | 1.0 | 8000 | 0.2912 | 0.9365 | 0.9362 |
| 0.2046 | 2.0 | 16000 | 0.2285 | 0.9425 | 0.9422 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
JoanTirant/bert-finetuned-ner | 03020eea055c576fd587bce1f34dc9c3672d08cc | 2022-05-17T10:40:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | JoanTirant | null | JoanTirant/bert-finetuned-ner | 15 | null | transformers | 9,647 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9363893041023086
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.9425729332107331
- name: Accuracy
type: accuracy
value: 0.9855183375522458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0679
- Precision: 0.9364
- Recall: 0.9488
- F1: 0.9426
- Accuracy: 0.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0884 | 1.0 | 1756 | 0.0662 | 0.9083 | 0.9317 | 0.9198 | 0.9824 |
| 0.04 | 2.0 | 3512 | 0.0613 | 0.9341 | 0.9493 | 0.9417 | 0.9856 |
| 0.0187 | 3.0 | 5268 | 0.0679 | 0.9364 | 0.9488 | 0.9426 | 0.9855 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
nqcccccc/phobert-social-media-text-classify | 4e7654c7428dfe43ebd7ce61a4c3a0240e87d9cf | 2022-05-20T08:06:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | nqcccccc | null | nqcccccc/phobert-social-media-text-classify | 15 | null | transformers | 9,648 | Entry not found |
connectivity/bert_ft_qqp-1 | 74eb955efe01aa28752278c6ac625dfb8d66e763 | 2022-05-21T16:31:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-1 | 15 | null | transformers | 9,649 | Entry not found |
connectivity/bert_ft_qqp-7 | 604fdcd389061d4a9bf972e4cb432a1bb9e11b2f | 2022-05-21T16:31:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | connectivity | null | connectivity/bert_ft_qqp-7 | 15 | null | transformers | 9,650 | Entry not found |
Dani-91/bert-finetuned-ner | f1049b82a2d4af20d2fe5c954c669424fe929613 | 2022-05-21T13:25:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Dani-91 | null | Dani-91/bert-finetuned-ner | 15 | null | transformers | 9,651 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9325062034739454
- name: Recall
type: recall
value: 0.9486704813194211
- name: F1
type: f1
value: 0.9405188954700927
- name: Accuracy
type: accuracy
value: 0.9859745687878966
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0618
- Precision: 0.9325
- Recall: 0.9487
- F1: 0.9405
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0874 | 1.0 | 1756 | 0.0645 | 0.9194 | 0.9382 | 0.9287 | 0.9835 |
| 0.0384 | 2.0 | 3512 | 0.0614 | 0.9297 | 0.9463 | 0.9379 | 0.9845 |
| 0.0186 | 3.0 | 5268 | 0.0618 | 0.9325 | 0.9487 | 0.9405 | 0.9860 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
aspis/gpt2-genre-story-generation | 101975f472682ab718168d84ac1902e065e90848 | 2022-05-23T10:36:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:apache-2.0"
]
| text-generation | false | aspis | null | aspis/gpt2-genre-story-generation | 15 | null | transformers | 9,652 | ---
language:
- en
tags:
- text-generation
license: apache-2.0
---
# GPT-2 fine-tuned for short story generation
GPT-2 for short story generation with genres.
## Model description
GPT-2 model fine-tuned on a sample of the BookCorpus dataset for short story generation. It allows for the following genres (tokens to use as input in parentheses):
- Romance (romance)
- Adventure (adventure)
- Mystery & detective (mystery-&-detective)
- Fantasy (fantasy)
- Humor & comedy (humor-&-comedy)
- Paranormal (paranormal)
- Science fiction (science-fiction)
Heavily inspired by https://huggingface.co/pranavpsv
## Intended uses & limitations
This can be used for text generation.
### How to use:
```python
>>> from transformers import pipeline, TextGenerationPipeline, GPT2LMHeadModel, AutoTokenizer
>>> model_name = "aspis/gpt2-genre-story-generation"
>>> model = GPT2LMHeadModel.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> generator = TextGenerationPipeline(model=model, tokenizer=tokenizer)
# Input should be of format "<BOS> <Genre token> Optional starter text"
>>> input_prompt = "<BOS> <adventure>"
>>> story = generator(input_prompt, max_length=80, do_sample=True,
repetition_penalty=1.5, temperature=1.2,
top_p=0.95, top_k=50)
>>> print(story)
[{'generated_text': '<BOS> <adventure> "How come they got that one?" asked Louran. The leader of the House, a young man with blonde hair and an odd grin...that didn\'t look so bad to her if she did have a smile on its face. She had known about this before. And now he\'d admitted it himself;'}]
```
## Training data
The model was trained on the BookCorpus dataset by retrieving each book's genre and dividing the text into paragraphs.
|
BM-K/KoSimCSE-bert | e479c50e3cba18fc557207f856d12a5b2e456b3e | 2022-06-03T01:47:13.000Z | [
"pytorch",
"bert",
"feature-extraction",
"ko",
"transformers",
"korean"
]
| feature-extraction | false | BM-K | null | BM-K/KoSimCSE-bert | 15 | 1 | transformers | 9,653 | ---
language: ko
tags:
- korean
---
https://github.com/BM-K/Sentence-Embedding-is-all-you-need
# Korean-Sentence-Embedding
🍭 Korean sentence embedding repository. You can download the pre-trained models and run inference right away, and it also provides environments where individuals can train models.
## Quick tour
```python
import torch
from transformers import AutoModel, AutoTokenizer
def cal_score(a, b):
if len(a.shape) == 1: a = a.unsqueeze(0)
if len(b.shape) == 1: b = b.unsqueeze(0)
a_norm = a / a.norm(dim=1)[:, None]
b_norm = b / b.norm(dim=1)[:, None]
return torch.mm(a_norm, b_norm.transpose(0, 1)) * 100
model = AutoModel.from_pretrained('BM-K/KoSimCSE-bert')
tokenizer = AutoTokenizer.from_pretrained('BM-K/KoSimCSE-bert')
sentences = ['치타가 들판을 가로 질러 먹이를 쫓는다.',
'치타 한 마리가 먹이 뒤에서 달리고 있다.',
'원숭이 한 마리가 드럼을 연주한다.']
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
embeddings, _ = model(**inputs, return_dict=False)
score01 = cal_score(embeddings[0][0], embeddings[1][0])
score02 = cal_score(embeddings[0][0], embeddings[2][0])
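# Scores are cosine similarities scaled to [0, 100]; the first pair (two cheetah sentences)
# should score noticeably higher than the second (cheetah vs. drumming monkey).
print(score01, score02)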
```
## Performance
- Semantic Textual Similarity test set results <br>
| Model | AVG | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| KoSBERT<sup>†</sup><sub>SKT</sub> | 77.40 | 78.81 | 78.47 | 77.68 | 77.78 | 77.71 | 77.83 | 75.75 | 75.22 |
| KoSBERT | 80.39 | 82.13 | 82.25 | 80.67 | 80.75 | 80.69 | 80.78 | 77.96 | 77.90 |
| KoSRoBERTa | 81.64 | 81.20 | 82.20 | 81.79 | 82.34 | 81.59 | 82.20 | 80.62 | 81.25 |
| | | | | | | | | |
| KoSentenceBART | 77.14 | 79.71 | 78.74 | 78.42 | 78.02 | 78.40 | 78.00 | 74.24 | 72.15 |
| KoSentenceT5 | 77.83 | 80.87 | 79.74 | 80.24 | 79.36 | 80.19 | 79.27 | 72.81 | 70.17 |
| | | | | | | | | |
| KoSimCSE-BERT<sup>†</sup><sub>SKT</sub> | 81.32 | 82.12 | 82.56 | 81.84 | 81.63 | 81.99 | 81.74 | 79.55 | 79.19 |
| KoSimCSE-BERT | 83.37 | 83.22 | 83.58 | 83.24 | 83.60 | 83.15 | 83.54 | 83.13 | 83.49 |
| KoSimCSE-RoBERTa | 83.65 | 83.60 | 83.77 | 83.54 | 83.76 | 83.55 | 83.77 | 83.55 | 83.64 |
| | | | | | | | | | |
| KoSimCSE-BERT-multitask | 85.71 | 85.29 | 86.02 | 85.63 | 86.01 | 85.57 | 85.97 | 85.26 | 85.93 |
| KoSimCSE-RoBERTa-multitask | 85.77 | 85.08 | 86.12 | 85.84 | 86.12 | 85.83 | 86.12 | 85.03 | 85.99 | |
LianZhang/finetuning-sentiment-model-3000-samples | cac56de27e297ce641002cbea567d9511a414b21 | 2022-07-13T22:32:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | LianZhang | null | LianZhang/finetuning-sentiment-model-3000-samples | 15 | null | transformers | 9,654 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8754208754208754
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3182
- Accuracy: 0.8767
- F1: 0.8754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
neuralmagic/oBERT-3-upstream-pretrained-dense | 0609a50ec6a3ccd7c3b103141838ac352884c693 | 2022-06-20T11:36:52.000Z | [
"pytorch",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
]
| null | false | neuralmagic | null | neuralmagic/oBERT-3-upstream-pretrained-dense | 15 | null | null | 9,655 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-3-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to 3 layers from `neuralmagic/oBERT-12-upstream-pretrained-dense`, pretrained with knowledge distillation. This model is used as a starting point for downstream finetuning and pruning runs presented in the `Table 3 - 3 Layers`.
The model can also be used for finetuning on any downstream task, as a starting point instead of the three times larger `bert-base-uncased` model.
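A minimal loading sketch (assuming the checkpoint exposes the standard Hugging Face BERT classes; swap in the task-specific head, e.g. `AutoModelForQuestionAnswering`, for downstream finetuning):
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Load the 3-layer dense upstream checkpoint as a drop-in replacement for bert-base-uncased.
tokenizer = AutoTokenizer.from_pretrained("neuralmagic/oBERT-3-upstream-pretrained-dense")
model = AutoModelForMaskedLM.from_pretrained("neuralmagic/oBERT-3-upstream-pretrained-dense")
```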
Finetuned and pruned versions of this model on the SQuADv1 downstream task, as described in the paper:
- 0%: `neuralmagic/oBERT-3-downstream-dense-squadv1`
- 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`
```
Training objective: masked language modeling (MLM) + knowledge distillation
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 3
```
Code: _coming soon_
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
tartuNLP/mtee-domain-detection | b0203dd9ab497de587c50f064a0d7e381c67ed1c | 2022-05-26T22:38:39.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"et",
"en",
"ru",
"de",
"transformers"
]
| text-classification | false | tartuNLP | null | tartuNLP/mtee-domain-detection | 15 | null | transformers | 9,656 | ---
language:
- et
- en
- ru
- de
tags:
- text-classification
widget:
- text: "Täna lõppes Valgamaa õppuse Siil aktiivne lahingutegevus, mille käigus pidi täielikult formeeritud 2. jalaväebrigaad kaitsma end vastase pealetungi eest."
---
A domain detection model for the MTee machine translation platform. The platform was developed in 2021 as a collaboration between [TartuNLP](https://tartunlp.ai), the NLP research group at the University of Tartu, and [Tilde](https://tilde.com). More information about the project can be found [here](https://github.com/Project-MTee/mtee-platform/wiki).
#### Model Description
The model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). It classifies the input sentence into one of the following four domains: `general`, `crisis`, `legal`, `military`. |
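A minimal usage sketch with the `transformers` text-classification pipeline (the example sentence is adapted from the widget above):
```python
from transformers import pipeline

# Classifies a sentence into one of: general, crisis, legal, military.
domain_detector = pipeline("text-classification", model="tartuNLP/mtee-domain-detection")
print(domain_detector("Täna lõppes Valgamaa õppuse Siil aktiivne lahingutegevus."))
```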
Rebreak/autotrain-News-916530070 | b77d4199c92582eca460978433b96defb7f3f547 | 2022-05-27T05:12:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Rebreak/autotrain-data-News",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | Rebreak | null | Rebreak/autotrain-News-916530070 | 15 | null | transformers | 9,657 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Rebreak/autotrain-data-News
co2_eq_emissions: 62.61326668998836
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 916530070
- CO2 Emissions (in grams): 62.61326668998836
## Validation Metrics
- Loss: 0.0855042040348053
- Accuracy: 0.9773220921733938
- Precision: 0.673469387755102
- Recall: 0.014864864864864866
- AUC: 0.8605107881181646
- F1: 0.029087703834288235
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Rebreak/autotrain-News-916530070
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Rebreak/autotrain-News-916530070", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Rebreak/autotrain-News-916530070", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Jeevesh8/init_bert_ft_qqp-44 | e91e1d668d5362aab64271c3c2ff620674b9cecf | 2022-06-02T12:39:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-44 | 15 | null | transformers | 9,658 | Entry not found |
Ce/bert-finetuned-ner | a2d8c285614dc341e0303c63ead014655c3c774b | 2022-06-02T14:29:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Ce | null | Ce/bert-finetuned-ner | 15 | null | transformers | 9,659 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9329581195166363
- name: Recall
type: recall
value: 0.9485021878155503
- name: F1
type: f1
value: 0.9406659434198448
- name: Accuracy
type: accuracy
value: 0.985356449049273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0649
- Precision: 0.9330
- Recall: 0.9485
- F1: 0.9407
- Accuracy: 0.9854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0871 | 1.0 | 1756 | 0.0672 | 0.9209 | 0.9387 | 0.9297 | 0.9834 |
| 0.0394 | 2.0 | 3512 | 0.0584 | 0.9311 | 0.9505 | 0.9407 | 0.9857 |
| 0.0201 | 3.0 | 5268 | 0.0649 | 0.9330 | 0.9485 | 0.9407 | 0.9854 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Evelyn18/legalectra-base-spanish-finetuned-squad | dea97c61d5db921e2b629e9abee91cc0b851070b | 2022-06-06T06:22:06.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:squad_es",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Evelyn18 | null | Evelyn18/legalectra-base-spanish-finetuned-squad | 15 | null | transformers | 9,660 | ---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: legalectra-base-spanish-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-base-spanish-finetuned-squad
This model is a fine-tuned version of [mrm8488/legalectra-base-spanish](https://huggingface.co/mrm8488/legalectra-base-spanish) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.9506 |
| No log | 2.0 | 6 | 5.9506 |
| No log | 3.0 | 9 | 5.9506 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Anery/legalbert_beneficiary_single | f32800e877872680e587d3f34cde105278ddabaf | 2022-06-08T06:45:36.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Anery | null | Anery/legalbert_beneficiary_single | 15 | null | transformers | 9,661 | Entry not found |
candra/punctuatorid | 97023ae7368a92be8597922baa61e9c25359db42 | 2022-06-09T08:18:01.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| token-classification | false | candra | null | candra/punctuatorid | 15 | null | transformers | 9,662 | ---
license: afl-3.0
---
|
ghadeermobasher/Original-SciBERT-BC5CDR-Chemical | c7722f5b2a261da89ff229bb0244dfd68cc91f4b | 2022-06-09T12:19:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC5CDR-Chemical | 15 | null | transformers | 9,663 | Entry not found |
ghadeermobasher/Original-PubMedBERT-BC4CHEMD | 9e5eb97a1577ac32e8a1902d15f6c202f5dd087b | 2022-06-09T12:39:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-PubMedBERT-BC4CHEMD | 15 | null | transformers | 9,664 | Entry not found |
ghadeermobasher/Original-BlueBERT-BC4CHEMD | 432744847a43e7ad5dcb4497232852c6ab28f0b3 | 2022-06-09T17:16:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BlueBERT-BC4CHEMD | 15 | null | transformers | 9,665 | Entry not found |
ghadeermobasher/Original-BlueBERT-BC2GM | 10e28baadfaeab55ce70934916845469d173272f | 2022-06-09T14:11:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BlueBERT-BC2GM | 15 | null | transformers | 9,666 | Entry not found |
ghadeermobasher/Original-PubMedBERT-BC2GM | f3a086e6149e65493c518d837f477d115c9eec96 | 2022-06-10T16:57:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-PubMedBERT-BC2GM | 15 | null | transformers | 9,667 | Entry not found |
ghadeermobasher/Original-SciBERT-BC2GM | e2a432b4ee1a072a603ef0274abd12056dc1a51d | 2022-06-09T16:41:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC2GM | 15 | null | transformers | 9,668 | Entry not found |
ghadeermobasher/Original-BlueBERT-Linnaeus | c3354669dfd2efe02443a563daecf818b148e66b | 2022-06-10T14:41:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BlueBERT-Linnaeus | 15 | null | transformers | 9,669 | Entry not found |
ghadeermobasher/Original-SciBERT-Linnaeus | cfa3d3a835004e589cb06af3f9808dbb25f4933c | 2022-06-10T14:17:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-Linnaeus | 15 | null | transformers | 9,670 | Entry not found |
ghadeermobasher/Original-SciBERT-BC5CDR-Chemical-T | 7002a4d88a34a017f086add1ec8915ac69d8f71e | 2022-06-09T18:04:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC5CDR-Chemical-T | 15 | null | transformers | 9,671 | Entry not found |
ghadeermobasher/Original-SciBERT-BC5CDR-Chemical-T1 | b6fb6dce7e12bcc9ba688e330d7ea4ac02b876a8 | 2022-06-09T18:15:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC5CDR-Chemical-T1 | 15 | null | transformers | 9,672 | Entry not found |
rsuwaileh/IDRISI-LMR-HD-TL-partition | 0d76606a580dbc0e27009cc0f2cdb1e7553b2c70 | 2022-07-18T09:16:12.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | rsuwaileh | null | rsuwaileh/IDRISI-LMR-HD-TL-partition | 15 | null | transformers | 9,673 | This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained on the Hurricane Dorian 2019 event (the training split is used for training) from the [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI), under the type-less LMR mode and using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
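A minimal usage sketch with the `transformers` token-classification pipeline (the example tweet is illustrative):
```python
from transformers import pipeline

# Tags location mentions (type-less LMR) in a tweet.
lmr = pipeline("token-classification", model="rsuwaileh/IDRISI-LMR-HD-TL-partition")
print(lmr("Flooding reported in Freeport after Hurricane Dorian made landfall."))
```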
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TB-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB-partition/)
- [rsuwaileh/IDRISI-LMR-HD-TL](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
QCRI/bert-base-cased-sem | a44a4cc969b3ce93bae590cd03aa9ab5a42b286c | 2022-06-13T06:02:07.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
]
| token-classification | false | QCRI | null | QCRI/bert-base-cased-sem | 15 | null | transformers | 9,674 | ---
license: cc-by-nc-4.0
---
|
ghadeermobasher/BC4CHEMD-Chem-Original-BlueBERT-512 | 982beaef243c5fd73e860efa6144d5c2b03145f9 | 2022-06-14T10:13:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Original-BlueBERT-512 | 15 | null | transformers | 9,675 | Entry not found |
Seema09/finetuning-sentiment-model-Test | f7e2f2033d6c011220549de8670f844b353c6382 | 2022-06-16T13:25:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Seema09 | null | Seema09/finetuning-sentiment-model-Test | 15 | null | transformers | 9,676 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-Test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.904
- name: F1
type: f1
value: 0.9047619047619047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-Test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2703
- Accuracy: 0.904
- F1: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aditya22/bert-finetuned-ner | 720faba12b79e083155ff45b7bae2a1ea5b7faea | 2022-06-17T07:18:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | aditya22 | null | aditya22/bert-finetuned-ner | 15 | null | transformers | 9,677 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.936018564561578
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9431315240083508
- name: Accuracy
type: accuracy
value: 0.9859598516512628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0642
- Precision: 0.9360
- Recall: 0.9504
- F1: 0.9431
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0855 | 1.0 | 1756 | 0.0642 | 0.9108 | 0.9387 | 0.9246 | 0.9834 |
| 0.0414 | 2.0 | 3512 | 0.0619 | 0.9331 | 0.9502 | 0.9415 | 0.9853 |
| 0.0181 | 3.0 | 5268 | 0.0642 | 0.9360 | 0.9504 | 0.9431 | 0.9860 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Nonzerophilip/bert-finetuned-ner_swedish_test_NUMb_2 | f09f63214190a7c5f7c5a04d3da6ad5c937dd281 | 2022-06-17T12:12:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Nonzerophilip | null | Nonzerophilip/bert-finetuned-ner_swedish_test_NUMb_2 | 15 | null | transformers | 9,678 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_swedish_test_NUMb_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_test_NUMb_2
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0676
- Precision: 0.75
- Recall: 0.7179
- F1: 0.7336
- Accuracy: 0.9811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 128 | 0.0637 | 0.7477 | 0.6838 | 0.7143 | 0.9816 |
| No log | 2.0 | 256 | 0.0642 | 0.7304 | 0.7179 | 0.7241 | 0.9803 |
| No log | 3.0 | 384 | 0.0676 | 0.75 | 0.7179 | 0.7336 | 0.9811 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jeevesh8/std_0pnt2_bert_ft_cola-7 | a23ee45d9608c80211e627db57635f02e88f11a5 | 2022-06-21T13:28:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-7 | 15 | null | transformers | 9,679 | Entry not found |
raphaelsty/semanlink_all_mpnet_base_v2 | ca788f7899e7958a200c78f3abfd302517a390ae | 2022-06-28T09:28:05.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"en",
"fr",
"sentence-transformers",
"sentence-similarity",
"license:apache-2.0"
]
| sentence-similarity | false | raphaelsty | null | raphaelsty/semanlink_all_mpnet_base_v2 | 15 | null | sentence-transformers | 9,680 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language:
- en
- fr
license: apache-2.0
---
## `semanlink_all_mpnet_base_v2`
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
`semanlink_all_mpnet_base_v2` has been fine-tuned on the knowledge graph [Semanlink](http://www.semanlink.net/sl/home?lang=fr) via the library [MKB](https://github.com/raphaelsty/mkb) on the link-prediction task. The model is dedicated to the representation of both technical and generic terminology in machine learning, NLP, and news.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Machine Learning", "Geoffrey Hinton"]
model = SentenceTransformer('raphaelsty/semanlink_all_mpnet_base_v2')
embeddings = model.encode(sentences)
print(embeddings)
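# The embeddings can be compared directly for clustering or semantic search;
# illustrative example: cosine similarity between the two terms encoded above.
from sentence_transformers import util
print(util.cos_sim(embeddings[0], embeddings[1]))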
``` |
Laure996/bert-finetuned-ner | 500eb7ea67e026d6160dc4565aa656537f045e5d | 2022-06-27T10:00:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Laure996 | null | Laure996/bert-finetuned-ner | 15 | null | transformers | 9,681 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9329136988570482
- name: Recall
type: recall
value: 0.9478290138000673
- name: F1
type: f1
value: 0.9403122130394858
- name: Accuracy
type: accuracy
value: 0.9855477718255137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0663
- Precision: 0.9329
- Recall: 0.9478
- F1: 0.9403
- Accuracy: 0.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0837 | 1.0 | 1756 | 0.0656 | 0.9151 | 0.9392 | 0.9270 | 0.9834 |
| 0.0388 | 2.0 | 3512 | 0.0619 | 0.9249 | 0.9475 | 0.9361 | 0.9855 |
| 0.0198 | 3.0 | 5268 | 0.0663 | 0.9329 | 0.9478 | 0.9403 | 0.9855 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
projecte-aina/roberta-base-ca-v2-cased-tc | b629591dc6762ce358e72f9d1640ea8966f19ca8 | 2022-07-25T06:50:52.000Z | [
"pytorch",
"roberta",
"text-classification",
"ca",
"dataset:projecte-aina/tecla",
"arxiv:1907.11692",
"transformers",
"catalan",
"text classification",
"tecla",
"CaText",
"Catalan Textual Corpus",
"model-index"
]
| text-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-v2-cased-tc | 15 | null | transformers | 9,682 | ---
language:
- ca
tags:
- "catalan"
- "text classification"
- "tecla"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/tecla"
metrics:
- accuracy
model-index:
- name: roberta-base-ca-v2-cased-tc
results:
- task:
type: text-classification
dataset:
name: TeCla
type: projecte-aina/tecla
metrics:
- name: Accuracy
type: accuracy
value: 0.7426
widget:
- text: "Els Pets presenten el seu nou treball al Palau Sant Jordi."
- text: "Els barcelonins incrementen un 23% l’ús del cotxe des de l’inici de la pandèmia."
- text: "Retards a quatre línies de Rodalies per una avaria entre Sants i plaça de Catalunya."
- text: "Majors de 60 anys i sanitaris començaran a rebre la tercera dosi de la vacuna covid els propers dies."
- text: "Els cinemes Verdi estrenen Verdi Classics, un nou canal de televisió."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Text Classification.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-tc** is a Text Classification (TC) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
The **roberta-base-ca-v2-cased-tc** model can be used to classify texts. The model is limited by its training dataset and may not generalize well to all use cases.
## How to Use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-tc")
example = "Retards a quatre línies de Rodalies per una avaria entre Sants i plaça de Catalunya."
tc_results = nlp(example)
pprint(tc_results)
```
## Training
### Training data
We used the TC dataset in Catalan called [TeCla](https://huggingface.co/datasets/projecte-aina/tecla) for training and evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric on the corresponding development set, and evaluated it on the test set.
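A rough sketch of this fine-tuning setup with the Hugging Face `Trainer` (the TeCla column and split names used below are assumptions; adjust them to the actual dataset schema):
```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("projecte-aina/tecla")
tokenizer = AutoTokenizer.from_pretrained("projecte-aina/roberta-base-ca-v2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

encoded = dataset.map(tokenize, batched=True)
num_labels = encoded["train"].features["label"].num_classes

model = AutoModelForSequenceClassification.from_pretrained(
    "projecte-aina/roberta-base-ca-v2", num_labels=num_labels
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-tc",
    learning_rate=5e-5,                 # hyperparameters described above
    per_device_train_batch_size=16,
    num_train_epochs=5,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,        # keep the best dev-set checkpoint
    metric_for_best_model="accuracy",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,                # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
```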
## Evaluation
### Variable and Metrics
This model was fine-tuned to maximize accuracy.
## Evaluation results
We evaluated the _roberta-base-ca-v2-cased-tc_ on the TeCla test set against standard multilingual and monolingual baselines:
| Model | TeCla (Accuracy) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-tc | **74.26** |
| roberta-base-ca-cased-tc | 73.65 |
| mBERT | 69.90 |
| XLM-RoBERTa | 70.14 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A] |
emen/distilbert-base-uncased-finetuned-emotion | 14ca05ff278d294dcfa855991fc1a881757c004a | 2022-06-30T12:17:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | emen | null | emen/distilbert-base-uncased-finetuned-emotion | 15 | null | transformers | 9,683 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9297561758557029
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Accuracy: 0.9295
- F1: 0.9298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8495 | 1.0 | 250 | 0.3141 | 0.9085 | 0.9060 |
| 0.2511 | 2.0 | 500 | 0.2181 | 0.9295 | 0.9298 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Luojike/autotrain-test-4-macbert-1071837613 | d1cbf0c6b2e6b5702f5be3abc9b139238c9422f1 | 2022-07-01T15:45:50.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:Luojike/autotrain-data-test-4-macbert",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | Luojike | null | Luojike/autotrain-test-4-macbert-1071837613 | 15 | null | transformers | 9,684 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Luojike/autotrain-data-test-4-macbert
co2_eq_emissions: 0.012225117907336358
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1071837613
- CO2 Emissions (in grams): 0.012225117907336358
## Validation Metrics
- Loss: 0.533202052116394
- Accuracy: 0.7408088235294118
- Precision: 0.5072463768115942
- Recall: 0.4088785046728972
- AUC: 0.710585043624057
- F1: 0.4527813712807245
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Luojike/autotrain-test-4-macbert-1071837613
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Luojike/autotrain-test-4-macbert-1071837613", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Luojike/autotrain-test-4-macbert-1071837613", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
FabianWillner/bert-base-uncased-finetuned-squad-finetuned-triviaqa | 5b78483053301a335e2bf0935c9e8ca22df11a4b | 2022-07-02T11:26:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | FabianWillner | null | FabianWillner/bert-base-uncased-finetuned-squad-finetuned-triviaqa | 15 | null | transformers | 9,685 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-squad-finetuned-triviaqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad-finetuned-triviaqa
This model is a fine-tuned version of [FabianWillner/bert-base-uncased-finetuned-squad](https://huggingface.co/FabianWillner/bert-base-uncased-finetuned-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9087 | 1.0 | 11195 | 0.8906 |
| 0.6533 | 2.0 | 22390 | 0.9132 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
steven123/Check_GoodBad_Teeth | 4734185760da914ecbed1e8bf26a0a92e440bcdc | 2022-07-05T03:52:40.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | steven123 | null | steven123/Check_GoodBad_Teeth | 15 | null | transformers | 9,686 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Check_GoodBad_Teeth
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# Check_GoodBad_Teeth
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
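A minimal inference sketch with the `transformers` image-classification pipeline (the image path is a hypothetical placeholder):
```python
from transformers import pipeline

# Returns scores for the two classes shown below: Bad Teeth / Good Teeth.
classifier = pipeline("image-classification", model="steven123/Check_GoodBad_Teeth")
print(classifier("path/to/teeth_photo.jpg"))  # hypothetical local image path
```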
## Example Images
#### Bad Teeth

#### Good Teeth
 |
ghadeermobasher/Original-BlueBERT-BioRED-Chem | fb4acad62b0f87d3f896c92d5b78abdb821bec7c | 2022-07-06T15:04:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-BlueBERT-BioRED-Chem | 15 | null | transformers | 9,687 | Entry not found |
ScarlettSun9/autotrain-ZuoZhuan-1100540143 | 05fd59835631b55becb980b296d5d3799b475380 | 2022-07-07T07:11:00.000Z | [
"pytorch",
"roberta",
"token-classification",
"unk",
"dataset:ScarlettSun9/autotrain-data-ZuoZhuan",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| token-classification | false | ScarlettSun9 | null | ScarlettSun9/autotrain-ZuoZhuan-1100540143 | 15 | null | transformers | 9,688 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ScarlettSun9/autotrain-data-ZuoZhuan
co2_eq_emissions: 14.50120424968173
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1100540143
- CO2 Emissions (in grams): 14.50120424968173
## Validation Metrics
- Loss: 0.3792617619037628
- Accuracy: 0.8799234894798035
- Precision: 0.8133982801130555
- Recall: 0.8416925948973242
- F1: 0.8273035872656656
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ScarlettSun9/autotrain-ZuoZhuan-1100540143
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ScarlettSun9/autotrain-ZuoZhuan-1100540143", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ScarlettSun9/autotrain-ZuoZhuan-1100540143", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
danielreales00/results | 35c9e7e82449883915d4f88297340c6986763475 | 2022-07-10T19:13:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | danielreales00 | null | danielreales00/results | 15 | null | transformers | 9,689 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
luffycodes/t5_small_v1_bb | 8a4ea7dc2b457061e4c981e8748651b4e6801421 | 2022-07-11T08:11:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | luffycodes | null | luffycodes/t5_small_v1_bb | 15 | null | transformers | 9,690 | Entry not found |
agarwalchaitanya/muril-unified-ei-infotabs-btnli | c1f1543bb3f03bd0cb40f9276b64afa9fdab25d3 | 2022-07-11T19:46:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | agarwalchaitanya | null | agarwalchaitanya/muril-unified-ei-infotabs-btnli | 15 | null | transformers | 9,691 | ---
license: apache-2.0
---
|
Chirayu/mt5-multilingual-sentiment | 2823503f4d2ac52f228c1ff061b891a6abea77ab | 2022-07-12T10:24:23.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Chirayu | null | Chirayu/mt5-multilingual-sentiment | 15 | null | transformers | 9,692 | # This model predicts the sentiment('Negative'/'Positive') for the input sentence. It is fine-tuned mt5-small
The present model supports 6 languages -
1) English
2) Hindi
3) German
4) Korean
5) Japanese
6) Portuguese
Here is how to use this model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model = AutoModelForSeq2SeqLM.from_pretrained("Chirayu/mt5-multilingual-sentiment")
tokenizer = AutoTokenizer.from_pretrained("Chirayu/mt5-multilingual-sentiment")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
def get_sentiment(text, num_beams=2, max_length=512, repetition_penalty=2.5, length_penalty=1, early_stopping=True, top_p=.95, top_k=50, num_return_sequences=1):
    input_ids = tokenizer.encode(
        text, return_tensors="pt", add_special_tokens=True
    )
    input_ids = input_ids.to(device)
    generated_ids = model.generate(
        input_ids=input_ids,
        num_beams=num_beams,
        max_length=max_length,
        repetition_penalty=repetition_penalty,
        length_penalty=length_penalty,
        early_stopping=early_stopping,
        top_p=top_p,
        top_k=top_k,
        num_return_sequences=num_return_sequences,
    )
    sentiment = [
        tokenizer.decode(
            generated_id,
            skip_special_tokens=True,
            clean_up_tokenization_spaces=True,
        )
        for generated_id in generated_ids
    ]
    return sentiment
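
# Example call (illustrative input; any of the six supported languages works):
print(get_sentiment("The movie was absolutely wonderful!"))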
``` |
morenolq/thext-cs-scibert | 76a6fb0ce53c390002a615e3359fe2d2fd331627 | 2022-07-13T16:59:05.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"regression"
]
| text-classification | false | morenolq | null | morenolq/thext-cs-scibert | 15 | null | transformers | 9,693 | ---
language: "en"
tags:
- bert
- regression
- pytorch
pipeline:
- text-classification
widget:
- text: "We propose a new approach, based on Transformer-based encoding, to highlight extraction. To the best of our knowledge, this is the first attempt to use transformer architectures to address automatic highlight generation. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "We design a context-aware sentence-level regressor, in which the semantic similarity between candidate sentences and highlights is estimated by also attending the contextual knowledge provided by the other paper sections. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
- text: "Fig. 2, Fig. 3, Fig. 4 show the effect of varying the number K of selected highlights on the extraction performance. As expected, recall values increase while increasing the number of selected highlights, whereas precision values show an opposite trend. [SEP] Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art."
---
# General Information
This model is trained on journal publications belonging to the domain: **Computer Science**.
This is an `allenai/scibert_scivocab_cased` model trained in the scientific domain. The model is trained with a regression objective to estimate the relevance of a sentence according to the provided context (e.g., the abstract of the scientific paper).
The model is used in the paper 'Transformer-based highlights extraction from scientific papers' published in Knowledge-Based Systems scientific journal.
The model is able to achieve state-of-the-art performance in the task of highlights extraction from scientific papers.
Access to the full paper: [here](https://doi.org/10.1016/j.knosys.2022.109382).
# Usage:
For detailed usage please use the official repository https://github.com/MorenoLaQuatra/THExt .
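As a rough sketch only (it assumes the checkpoint exposes a standard sequence-classification head with a single regression output, and follows the `sentence [SEP] abstract` input format used in the widget examples above):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("morenolq/thext-cs-scibert")
model = AutoModelForSequenceClassification.from_pretrained("morenolq/thext-cs-scibert")

# Candidate sentence and its context (e.g., the paper abstract), joined with [SEP].
sentence = "We propose a new approach, based on Transformer-based encoding, to highlight extraction."
context = "Highlights are short sentences used to annotate scientific papers."

inputs = tokenizer(sentence + " [SEP] " + context, return_tensors="pt", truncation=True)
with torch.no_grad():
    relevance = model(**inputs).logits  # higher score = more highlight-worthy sentence
print(relevance)
```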
# References:
If you find it useful, please cite the following paper:
```bibtex
@article{thext,
title={Transformer-based highlights extraction from scientific papers},
author={La Quatra, Moreno and Cagliero, Luca},
journal={Knowledge-Based Systems},
pages={109382},
year={2022},
publisher={Elsevier}
}
``` |
jgriffi/bart_abstract_summarization | d1f31aff41111ae819df6938e87087894c7b7b0f | 2022-07-14T12:28:07.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | jgriffi | null | jgriffi/bart_abstract_summarization | 15 | null | transformers | 9,694 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart_abstract_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_abstract_summarization
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0559 | 0.25 | 500 | 0.1601 |
| 0.0068 | 0.49 | 1000 | 0.2571 |
| 0.0016 | 0.74 | 1500 | 0.4330 |
| 0.0001 | 0.99 | 2000 | 0.1852 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nvidia/speakerverification_en_titanet_large | 3c99844ecb1a732dde2d438f762068a4aa6a72ab | 2022-07-15T19:38:45.000Z | [
"nemo",
"en",
"dataset:VOXCELEB-1",
"dataset:VOXCELEB-2",
"dataset:FISHER",
"dataset:switchboard",
"dataset:librispeech_asr",
"dataset:SRE (2004-2010)",
"speaker",
"speech",
"audio",
"speaker-verification",
"speaker-recognition",
"speaker-diarization",
"titanet",
"NeMo",
"pytorch",
"license:cc-by-4.0",
"model-index"
]
| null | false | nvidia | null | nvidia/speakerverification_en_titanet_large | 15 | 1 | nemo | 9,695 | ---
language:
- en
library_name: nemo
datasets:
- VOXCELEB-1
- VOXCELEB-2
- FISHER
- switchboard
- librispeech_asr
- SRE (2004-2010)
thumbnail: null
tags:
- speaker
- speech
- audio
- speaker-verification
- speaker-recognition
- speaker-diarization
- titanet
- NeMo
- pytorch
license: cc-by-4.0
widget:
- src: https://huggingface.co/nvidia/speakerverification_en_titanet_large/resolve/main/an255-fash-b.wav
example_title: Speech sample 1
- src: https://huggingface.co/nvidia/speakerverification_en_titanet_large/resolve/main/cen7-fash-b.wav
example_title: Speech sample 2
model-index:
- name: speakerverification_en_titanet_large
results:
- task:
name: Speaker Verification
type: speaker-verification
dataset:
name: voxceleb1
type: voxceleb1-O
config: clean
split: test
args:
language: en
metrics:
- name: Test EER
type: eer
value: 0.66
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: ami-mixheadset
type: ami_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 1.73
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: ami-lapel
type: ami_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 2.03
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: ch109
type: callhome_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 1.19
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: nist-sre-2000
type: nist-sre_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 6.73
---
# NVIDIA TitaNet-Large (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model extracts speaker embeddings from given speech, which is the backbone for speaker verification and diarization tasks.
It is the "large" version of the TitaNet model (around 23M parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speaker_recognition/models.html#titanet) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest Pytorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained("nvidia/speakerverification_en_titanet_large")
```
### Embedding Extraction
To extract a speaker embedding from a single audio file:
```python
emb = speaker_model.get_embedding("an255-fash-b.wav")
```
### Verifying two utterances (Speaker Verification)
Now to check if two audio files are from the same speaker or not, simply do:
```python
speaker_model.verify_speakers("an255-fash-b.wav","cen7-fash-b.wav")
```
### Extracting Embeddings for more audio files
To extract embeddings from a bunch of audio files:
Write the audio files to a `manifest.json` file with one line per file, in the following format:
```json
{"audio_filepath": "<absolute path to dataset>/audio_file.wav", "duration": "duration of file in sec", "label": "speaker_id"}
```
Then running following script will extract embeddings and writes to current working directory:
```shell
python <NeMo_root>/examples/speaker_tasks/recognition/extract_speaker_embeddings.py --manifest=manifest.json
```
### Input
This model accepts 16000 Hz (16 kHz) mono-channel audio (wav files) as input.
### Output
This model provides speaker embeddings for an audio file.
## Model Architecture
The TitaNet model is a depth-wise separable conv1D model [1] for speaker verification and diarization tasks. You may find more details on this model here: [TitaNet-Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/models.html).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/recognition/speaker_reco.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/recognition/conf/titanet-large.yaml).
### Datasets
All the models in this collection are trained on a composite dataset comprising several thousand hours of English speech:
- Voxceleb-1
- Voxceleb-2
- Fisher
- Switchboard
- Librispeech
- SRE (2004-2010)
## Performance
The performance of these models is reported in terms of Equal Error Rate (EER%) on speaker verification evaluation trial files and Diarization Error Rate (DER%) on diarization test sessions.
* Speaker Verification (EER%)
| Version | Model | Model Size | VoxCeleb1 (Cleaned trial file) |
|---------|--------------|-----|---------------|
| 1.10.0 | TitaNet-Large | 23M | 0.66 |
* Speaker Diarization (DER%)
| Version | Model | Model Size | Evaluation Condition | NIST SRE 2000 | AMI (Lapel) | AMI (MixHeadset) | CH109 |
|---------|--------------|-----|----------------------|---------------|-------------|------------------|-------|
| 1.10.0 | TitaNet-Large | 23M | Oracle VAD KNOWN # of Speakers | 6.73 | 2.03 | 1.73 | 1.19 |
| 1.10.0 | TitaNet-Large | 23M | Oracle VAD UNKNOWN # of Speakers | 5.38 | 2.03 | 1.89 | 1.63 |
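As a reminder of how the EER numbers above are typically computed, the sketch below estimates EER from a set of trial scores as the operating point where the false-accept and false-reject rates cross; it is an illustration, not the official evaluation code, and the scores/labels are made up:
```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy trial scores (higher = more likely same speaker) and labels (1 = same, 0 = different)
scores = np.array([0.82, 0.34, 0.91, 0.15, 0.66, 0.48])
labels = np.array([1, 0, 1, 0, 1, 0])

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]  # point where FPR ≈ FNR
print(f"EER: {eer * 100:.2f}%")
```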
## Limitations
This model is trained on both telephonic and non-telephonic speech from the VoxCeleb, Fisher, and Switchboard datasets. If your data domain differs from the training data, or the model does not perform well on it, consider fine-tuning for that speech domain.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [TitaNet: Neural Model for Speaker Representation with 1D Depth-wise Separable convolutions and global context](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9746806)
[2] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
nloc2578/3.2 | 4ebb1d3c1d608db3433c57f253179e4f063353b0 | 2022-07-16T11:12:19.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | nloc2578 | null | nloc2578/3.2 | 15 | null | transformers | 9,696 | ---
tags:
- generated_from_trainer
model-index:
- name: '3.2'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3.2
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 3
- mixed_precision_training: Native AMP
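As an illustration only (this mapping is an assumption, not code taken from the original training run), the hyperparameters above roughly correspond to the following `Seq2SeqTrainingArguments`:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="3.2",                 # placeholder output directory
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=150,
    num_train_epochs=3,
    fp16=True,                        # Native AMP mixed precision
)
```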
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4157 | 0.3 | 1000 | 1.4088 |
| 1.3507 | 0.6 | 2000 | 1.2211 |
| 1.2083 | 0.9 | 3000 | 1.1041 |
| 0.7822 | 1.2 | 4000 | 1.1223 |
| 0.7388 | 1.5 | 5000 | 1.0472 |
| 0.7493 | 1.8 | 6000 | 0.9911 |
| 0.6247 | 2.1 | 7000 | 0.9990 |
| 0.5284 | 2.4 | 8000 | 1.0006 |
| 0.5284 | 2.7 | 9000 | 1.0066 |
| 0.525 | 2.99 | 10000 | 1.0095 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
lewiswu1209/gpt2-chinese-composition | fff00b9206eb6aad66da0356f70b58f11bff56c1 | 2022-07-17T10:52:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
]
| text-generation | false | lewiswu1209 | null | lewiswu1209/gpt2-chinese-composition | 15 | null | transformers | 9,697 | ---
license: mit
---
Quoted from <https://github.com/yangjianxin1/CPM#model_share> |
olgaduchovny/t5-base-qa-ner-conll | 7720f6eca6e1b86199e25720240868edbda8e392 | 2022-07-18T19:10:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:conll2003",
"arxiv:2203.03903",
"transformers",
"ner",
"qa",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | olgaduchovny | null | olgaduchovny/t5-base-qa-ner-conll | 15 | null | transformers | 9,698 | ---
language:
- en
tags:
- pytorch
- ner
- qa
inference: false
license: mit
datasets:
- conll2003
metrics:
- f1
---
# t5-base-qa-ner-conll
Unofficial implementation of [InstructionNER](https://arxiv.org/pdf/2203.03903v1.pdf).
t5-base model tuned on conll2003 dataset.
https://github.com/ovbystrova/InstructionNER
## Inference
```shell
git clone https://github.com/ovbystrova/InstructionNER
cd InstructionNER
```
```python
from instruction_ner.model import Model
model = Model(
model_path_or_name="olgaduchovny/t5-base-qa-ner-conll",
tokenizer_path_or_name="olgaduchovny/t5-base-qa-ner-conll"
)
options = ["LOC", "PER", "ORG", "MISC"]
instruction = "please extract entities and their types from the input sentence, " \
"all entity types are in options"
text = "The protest , which attracted several thousand supporters , coincided with the 18th anniversary of Spain 's constitution ."
generation_kwargs = {
"num_beams": 2,
"max_length": 128
}
pred_spans = model.predict(
text=text,
generation_kwargs=generation_kwargs,
instruction=instruction,
options=options
)
>>> [(99, 104, 'LOC')]
```
## Prediction Sample
```
Sentence: The protest , which attracted several thousand supporters , coincided with the 18th anniversary of Spain 's constitution .
Instruction: please extract entities and their types from the input sentence, all entity types are in options
Options: ORG, PER, LOC
Prediction (raw text): Spain is a LOC.
Prediction (span): [(99, 104, 'LOC')]
```
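For convenience, the predicted span tuples can be mapped back to surface strings by slicing the original sentence (a trivial illustration, not part of the repository's documented API):
```python
# Using `text` and `pred_spans` from the inference example above
entities = [(text[start:end], label) for start, end, label in pred_spans]
print(entities)  # [('Spain', 'LOC')]
```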
|
rajpurkarlab/biobert-finetuned-change-classification | f8bae5e2cf3ea809b82183100ad6888ee59f99e4 | 2022-07-25T23:25:30.000Z | [
"pytorch",
"bert",
"text-classification",
"py",
"transformers"
]
| text-classification | false | rajpurkarlab | null | rajpurkarlab/biobert-finetuned-change-classification | 15 | 1 | transformers | 9,699 | ---
language:
- py
metrics:
- f1
---
To use our fine-tuned BioBERT model to predict whether a sentence from a radiology report makes reference to priors, run the following:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
modelname = "rajpurkarlab/biobert-finetuned-change-classification"
tokenizer = AutoTokenizer.from_pretrained(modelname)
model = AutoModelForTokenClassification.from_pretrained(modelname)
```
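The snippet above stops at loading the model; a hedged sketch of running it on a sentence follows. How the output labels map onto the "references priors" decision (and whether the head is meant to be read per token or per sentence) is an assumption here, not something stated in the card:
```python
import torch

sentence = "Compared to the prior radiograph, there is no significant interval change."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # token-classification head: shape [1, seq_len, num_labels]

predicted_ids = logits.argmax(dim=-1)
print(predicted_ids)  # interpretation of these label ids is model-specific
```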
|