modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Jeevesh8/init_bert_ft_qqp-75 | a0ada8940b467a7d124a2ca4f5088fca3022d841 | 2022-06-02T12:44:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-75 | 8 | null | transformers | 13,500 | Entry not found |
Jeevesh8/init_bert_ft_qqp-76 | 84e0d05e290506700a58e3795e678c03dc621388 | 2022-06-02T12:41:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-76 | 8 | null | transformers | 13,501 | Entry not found |
Jeevesh8/init_bert_ft_qqp-78 | 2a1288c7439bae7d22f3c609776996daeba426fa | 2022-06-02T12:42:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-78 | 8 | null | transformers | 13,502 | Entry not found |
Jeevesh8/init_bert_ft_qqp-80 | a4c519ea25f9f1e5aa94282d6066a72c800fd4da | 2022-06-02T12:42:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-80 | 8 | null | transformers | 13,503 | Entry not found |
Jeevesh8/init_bert_ft_qqp-79 | ba8692a3a87b7a8eb7258a88a8b9fd9e729b1aca | 2022-06-02T12:42:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-79 | 8 | null | transformers | 13,504 | Entry not found |
Jeevesh8/init_bert_ft_qqp-77 | 6062d444b32992633b4fab7f645cdd6798281e36 | 2022-06-02T12:42:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-77 | 8 | null | transformers | 13,505 | Entry not found |
Jeevesh8/init_bert_ft_qqp-99 | 668c3092c8b34ae9df7d3a94e25b98fe18dec912 | 2022-06-02T12:43:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-99 | 8 | null | transformers | 13,506 | Entry not found |
Jeevesh8/init_bert_ft_qqp-92 | 681b119cfb091631b79d50aff29247bf855686e4 | 2022-06-02T12:41:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-92 | 8 | null | transformers | 13,507 | Entry not found |
Jeevesh8/init_bert_ft_qqp-91 | 6a86e5bd7cb3c2fb929e7ef64d12d0dd2eed323b | 2022-06-02T12:41:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-91 | 8 | null | transformers | 13,508 | Entry not found |
Jeevesh8/init_bert_ft_qqp-89 | d66f520df2d8884b3be138b13d6e760368aca9b3 | 2022-06-02T12:41:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-89 | 8 | null | transformers | 13,509 | Entry not found |
Jeevesh8/init_bert_ft_qqp-81 | afd11401892664d7470fd3f85dde2c75f63aa510 | 2022-06-02T12:41:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-81 | 8 | null | transformers | 13,510 | Entry not found |
Jeevesh8/init_bert_ft_qqp-84 | 14dc960a373ee62b6cb673df04285f49ad731539 | 2022-06-02T12:41:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-84 | 8 | null | transformers | 13,511 | Entry not found |
Jeevesh8/init_bert_ft_qqp-88 | 010a95442cb29aa8a8cfda7273084044724498ab | 2022-06-02T12:41:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-88 | 8 | null | transformers | 13,512 | Entry not found |
Jeevesh8/init_bert_ft_qqp-83 | 70628e02026e820b7cb0396ba710146fde9d75d2 | 2022-06-02T12:41:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-83 | 8 | null | transformers | 13,513 | Entry not found |
Jeevesh8/init_bert_ft_qqp-98 | 7379dc799a392be56ba4405501f480203153708f | 2022-06-02T12:41:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-98 | 8 | null | transformers | 13,514 | Entry not found |
Jeevesh8/init_bert_ft_qqp-82 | ad2210b9cfd99154786d366acf443ea3f47c56b5 | 2022-06-02T12:41:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-82 | 8 | null | transformers | 13,515 | Entry not found |
Jeevesh8/init_bert_ft_qqp-97 | 4e2647ff882d4864591183f8968687bcae6a02af | 2022-06-02T12:41:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-97 | 8 | null | transformers | 13,516 | Entry not found |
Jeevesh8/init_bert_ft_qqp-93 | 834698602a79c01208db04af60775289f9fdeba6 | 2022-06-02T12:41:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-93 | 8 | null | transformers | 13,517 | Entry not found |
Jeevesh8/init_bert_ft_qqp-87 | e32bf0a4bfd414e6d84e9164aca293022eb24b3c | 2022-06-02T12:41:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-87 | 8 | null | transformers | 13,518 | Entry not found |
Jeevesh8/init_bert_ft_qqp-94 | 4b797c8df2686d6a6e039b19a8a6a8655c7430a7 | 2022-06-02T12:41:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-94 | 8 | null | transformers | 13,519 | Entry not found |
Jeevesh8/init_bert_ft_qqp-85 | e1d5b1119599be2a84dec340df6ab84070f93053 | 2022-06-02T12:41:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-85 | 8 | null | transformers | 13,520 | Entry not found |
Jeevesh8/init_bert_ft_qqp-86 | d10100875cfb9ac67a7e63d67bb3fa3d98db7fb7 | 2022-06-02T12:41:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-86 | 8 | null | transformers | 13,521 | Entry not found |
Jeevesh8/init_bert_ft_qqp-95 | 87356dbb5664d53f73a87de105b498d7d1e5e0b2 | 2022-06-02T12:41:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-95 | 8 | null | transformers | 13,522 | Entry not found |
menbom/distilbert-base-uncased-finetuned-emotion | 000db8d5d75399dea19953fa67a7e23b0d1792fe | 2022-06-03T09:53:07.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | menbom | null | menbom/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,523 | Entry not found |
Jeevesh8/lecun_feather_berts-9 | dc57a1528d457b9d3b7b69bbd8e8505d0626d1ad | 2022-06-04T06:52:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-9 | 8 | null | transformers | 13,524 | Entry not found |
Jeevesh8/lecun_feather_berts-21 | 462d568601e833d481578ad0c293cefb19120d4f | 2022-06-04T06:52:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/lecun_feather_berts-21 | 8 | null | transformers | 13,525 | Entry not found |
ITESM/sentece-embeddings-BETO | 59076e57faab03b52224cbadf6c9d8d3d4ced220 | 2022-06-05T05:05:05.000Z | [
"pytorch",
"bert",
"feature-extraction",
"dataset:stackexchange_xml",
"dataset:code_search_net",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | ITESM | null | ITESM/sentece-embeddings-BETO | 8 | null | sentence-transformers | 13,526 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- stackexchange_xml
- code_search_net
---
# ITESM/sentece-embeddings-BETO
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ITESM/sentece-embeddings-BETO')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ITESM/sentece-embeddings-BETO')
model = AutoModel.from_pretrained('ITESM/sentece-embeddings-BETO')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ITESM/sentece-embeddings-BETO)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 16 with parameters:
```
{'batch_size': 100}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
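A minimal sketch of how these parameters map onto the sentence-transformers training API, assuming placeholder training pairs in place of the original (undistributed) corpus:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Placeholder training pairs; the original corpus is not part of this card.
train_examples = [
    InputExample(texts=[f"anchor sentence {i}", f"matching sentence {i}"])
    for i in range(200)
]

model = SentenceTransformer('ITESM/sentece-embeddings-BETO')  # starting checkpoint shown for illustration
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=100)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=2,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
)
```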
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
clhuang/albert-sentiment | 844af8efc9d031e01ca201becbc55922e1222e38 | 2022-06-07T09:11:08.000Z | [
"pytorch",
"bert",
"text-classification",
"tw",
"transformers",
"albert",
"classification",
"license:afl-3.0"
]
| text-classification | false | clhuang | null | clhuang/albert-sentiment | 8 | null | transformers | 13,527 | ---
language:
- tw
tags:
- albert
- classification
license: afl-3.0
metrics:
- Accuracy
---
# Traditional Chinese sentiment classification: Negative (0), Positive (1)
Fine-tuned from the ckiplab/albert pre-trained model; the training set contains only 80,000 examples, and the model is intended as a course example.
# Usage example:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("clhuang/albert-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("clhuang/albert-sentiment")
```
## Prediction
```python
target_names = ['Negative', 'Positive']
max_length = 200  # maximum token count; longer inputs are truncated to the model's maximum length
def get_sentiment_proba(text):
    # prepare the text as a tokenized sequence
    inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt")
    # run inference with the model
    outputs = model(**inputs)
    # convert the logits to probabilities with softmax
    probs = outputs[0].softmax(1)
    response = {'Negative': round(float(probs[0, 0]), 2), 'Positive': round(float(probs[0, 1]), 2)}
    # alternatively, argmax would return the predicted label index
    # return probs.argmax()
    return response

get_sentiment_proba('我喜歡這本書')
get_sentiment_proba('不喜歡這款產品')
``` |
anlausch/aq_bert_ibm | b6fae62660e27cce8b180b13012e7e67b86d6de8 | 2022-06-06T08:10:46.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | anlausch | null | anlausch/aq_bert_ibm | 8 | null | transformers | 13,528 | ---
license: mit
---
Model trained on IBMArgRank30k for 2 epochs with a learning rate of 3e-5 (optimised via grid search) in a similar way as in Lauscher et al. 2020 (see below). The original model was Tensorflow-based. This model corresponds to a reimplementation with Transformers & PyTorch.
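A minimal usage sketch, assuming the checkpoint loads through the standard `transformers` sequence-classification classes (the exact head layout, regression vs. classification, is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("anlausch/aq_bert_ibm")
model = AutoModelForSequenceClassification.from_pretrained("anlausch/aq_bert_ibm")

# Score a single argument; how to interpret the logits depends on the head configuration.
inputs = tokenizer("We should ban plastic bags because they pollute the oceans.", return_tensors="pt")
logits = model(**inputs).logits
```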
```
@inproceedings{lauscher-etal-2020-rhetoric,
title = "Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing",
author = "Lauscher, Anne and
Ng, Lily and
Napoles, Courtney and
Tetreault, Joel",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.402",
doi = "10.18653/v1/2020.coling-main.402",
pages = "4563--4574",
abstract = "Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory. However, a large-scale theory-based corpus and corresponding computational models are missing. We fill this gap by conducting an extensive analysis covering three diverse domains of online argumentative writing and presenting GAQCorpus: the first large-scale English multi-domain (community Q{\&}A forums, debate forums, review forums) corpus annotated with theory-based AQ scores. We then propose the first computational approaches to theory-based assessment, which can serve as strong baselines for future work. We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.",
}
``` |
miyagawaorj/distilbert-base-uncased-distilled-clinc | 172cff6eebb15590363a2e7d384771596327b957 | 2022-06-06T18:42:51.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | miyagawaorj | null | miyagawaorj/distilbert-base-uncased-distilled-clinc | 8 | null | transformers | 13,529 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9506451612903226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2466
- Accuracy: 0.9506
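A minimal usage sketch, assuming the standard text-classification pipeline applies (the example utterance is illustrative):
```python
from transformers import pipeline

# Intent classification on a CLINC-style banking utterance.
classifier = pipeline("text-classification", model="miyagawaorj/distilbert-base-uncased-distilled-clinc")
print(classifier("Transfer twenty dollars from my checking account to savings."))
```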
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9383 | 1.0 | 954 | 1.4511 | 0.8397 |
| 0.8485 | 2.0 | 1908 | 0.4733 | 0.9255 |
| 0.2822 | 3.0 | 2862 | 0.3070 | 0.9429 |
| 0.1515 | 4.0 | 3816 | 0.2664 | 0.9490 |
| 0.106 | 5.0 | 4770 | 0.2641 | 0.95 |
| 0.0874 | 6.0 | 5724 | 0.2536 | 0.9510 |
| 0.0764 | 7.0 | 6678 | 0.2475 | 0.9506 |
| 0.0718 | 8.0 | 7632 | 0.2450 | 0.9513 |
| 0.068 | 9.0 | 8586 | 0.2473 | 0.9497 |
| 0.0664 | 10.0 | 9540 | 0.2466 | 0.9506 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
suonbo/bert-finetuned-ner | 7340d5af30baf0e5002597558eacee03f7685e38 | 2022-06-07T07:24:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | suonbo | null | suonbo/bert-finetuned-ner | 8 | null | transformers | 13,530 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9335982778605729
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.9411568316501127
- name: Accuracy
type: accuracy
value: 0.9854447518690763
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0637
- Precision: 0.9336
- Recall: 0.9488
- F1: 0.9412
- Accuracy: 0.9854
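A minimal inference sketch, assuming the standard token-classification pipeline applies:
```python
from transformers import pipeline

# Aggregate word pieces into whole entities for readability.
ner = pipeline("token-classification", model="suonbo/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is a company based in New York City."))
```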
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0897 | 1.0 | 1756 | 0.0648 | 0.9152 | 0.9408 | 0.9278 | 0.9837 |
| 0.0384 | 2.0 | 3512 | 0.0601 | 0.9277 | 0.9507 | 0.9391 | 0.9859 |
| 0.0201 | 3.0 | 5268 | 0.0637 | 0.9336 | 0.9488 | 0.9412 | 0.9854 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ziq/depression_suggestion | aa16c86702944c32561c2bd2c37d25819087909e | 2022-06-07T07:18:44.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | ziq | null | ziq/depression_suggestion | 8 | null | transformers | 13,531 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: depression_suggestion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# depression_suggestion
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3740
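A minimal generation sketch, assuming the standard text-generation pipeline applies (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ziq/depression_suggestion")
print(generator("I have been feeling overwhelmed lately and", max_new_tokens=40, do_sample=True))
```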
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 60.7965 |
| No log | 2.0 | 6 | 60.5778 |
| No log | 3.0 | 9 | 60.1954 |
| No log | 4.0 | 12 | 59.6487 |
| No log | 5.0 | 15 | 58.9372 |
| No log | 6.0 | 18 | 58.0582 |
| No log | 7.0 | 21 | 57.0106 |
| No log | 8.0 | 24 | 55.7910 |
| No log | 9.0 | 27 | 54.3934 |
| No log | 10.0 | 30 | 52.8099 |
| No log | 11.0 | 33 | 51.0219 |
| No log | 12.0 | 36 | 49.0127 |
| No log | 13.0 | 39 | 46.7522 |
| No log | 14.0 | 42 | 44.2033 |
| No log | 15.0 | 45 | 41.3146 |
| No log | 16.0 | 48 | 37.9982 |
| No log | 17.0 | 51 | 34.2236 |
| No log | 18.0 | 54 | 29.8068 |
| No log | 19.0 | 57 | 24.9750 |
| No log | 20.0 | 60 | 20.0707 |
| No log | 21.0 | 63 | 15.5166 |
| No log | 22.0 | 66 | 12.0328 |
| No log | 23.0 | 69 | 9.1012 |
| No log | 24.0 | 72 | 7.2116 |
| No log | 25.0 | 75 | 6.3149 |
| No log | 26.0 | 78 | 5.8127 |
| No log | 27.0 | 81 | 5.4548 |
| No log | 28.0 | 84 | 5.1684 |
| No log | 29.0 | 87 | 4.8927 |
| No log | 30.0 | 90 | 4.6128 |
| No log | 31.0 | 93 | 4.3782 |
| No log | 32.0 | 96 | 4.1996 |
| No log | 33.0 | 99 | 4.0981 |
| No log | 34.0 | 102 | 4.0022 |
| No log | 35.0 | 105 | 3.9224 |
| No log | 36.0 | 108 | 3.8381 |
| No log | 37.0 | 111 | 3.7660 |
| No log | 38.0 | 114 | 3.6887 |
| No log | 39.0 | 117 | 3.6483 |
| No log | 40.0 | 120 | 3.6020 |
| No log | 41.0 | 123 | 3.5590 |
| No log | 42.0 | 126 | 3.5199 |
| No log | 43.0 | 129 | 3.4646 |
| No log | 44.0 | 132 | 3.4098 |
| No log | 45.0 | 135 | 3.3684 |
| No log | 46.0 | 138 | 3.3290 |
| No log | 47.0 | 141 | 3.3113 |
| No log | 48.0 | 144 | 3.3033 |
| No log | 49.0 | 147 | 3.2928 |
| No log | 50.0 | 150 | 3.2776 |
| No log | 51.0 | 153 | 3.2587 |
| No log | 52.0 | 156 | 3.2487 |
| No log | 53.0 | 159 | 3.2390 |
| No log | 54.0 | 162 | 3.2318 |
| No log | 55.0 | 165 | 3.2311 |
| No log | 56.0 | 168 | 3.2377 |
| No log | 57.0 | 171 | 3.2554 |
| No log | 58.0 | 174 | 3.2720 |
| No log | 59.0 | 177 | 3.2781 |
| No log | 60.0 | 180 | 3.2882 |
| No log | 61.0 | 183 | 3.3089 |
| No log | 62.0 | 186 | 3.3352 |
| No log | 63.0 | 189 | 3.3519 |
| No log | 64.0 | 192 | 3.3233 |
| No log | 65.0 | 195 | 3.3028 |
| No log | 66.0 | 198 | 3.3153 |
| No log | 67.0 | 201 | 3.3422 |
| No log | 68.0 | 204 | 3.3753 |
| No log | 69.0 | 207 | 3.4003 |
| No log | 70.0 | 210 | 3.3740 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RogerKam/roberta_fine_tuned_sentiment_financial_news | dd23f912e7c3977d38aa981a9e678ad577863c9e | 2022-06-07T11:25:35.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | RogerKam | null | RogerKam/roberta_fine_tuned_sentiment_financial_news | 8 | null | transformers | 13,532 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_fine_tuned_sentiment_financial_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_fine_tuned_sentiment_financial_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6362
- Accuracy: 0.8826
- F1 Score: 0.8865
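A minimal usage sketch, assuming the standard text-classification pipeline applies (the label names returned depend on the model config, which the card does not document):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RogerKam/roberta_fine_tuned_sentiment_financial_news")
print(classifier("Shares surged after the company reported better-than-expected quarterly earnings."))
```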
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.10.0+cu111
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kevincstowe/concept2seq-cefr | 41274c9a9c20d078a84a15dad98badc03ea2326f | 2022-06-08T13:30:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | kevincstowe | null | kevincstowe/concept2seq-cefr | 8 | null | transformers | 13,533 | Entry not found |
aspis/swin-base-finetuned-snacks | 07ee36696cdaa896c28e1dac2686037adbda2e1e | 2022-06-08T18:43:00.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:snacks",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | aspis | null | aspis/swin-base-finetuned-snacks | 8 | null | transformers | 13,534 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- snacks
metrics:
- accuracy
model-index:
- name: swin-base-finetuned-snacks
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: snacks
type: snacks
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9455497382198953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-finetuned-snacks
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the snacks dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2404
- Accuracy: 0.9455
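A minimal inference sketch, assuming the standard image-classification pipeline applies; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="aspis/swin-base-finetuned-snacks")
print(classifier("path/to/some_snack_photo.jpg"))  # placeholder path to a local image
```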
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0044 | 1.0 | 38 | 0.2981 | 0.9309 |
| 0.0023 | 2.0 | 76 | 0.2287 | 0.9445 |
| 0.0012 | 3.0 | 114 | 0.2404 | 0.9455 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RomanCast/camembert-miam-loria-finetuned | 41f60ae6b81e3cc8a7050018720de1bf822663c3 | 2022-06-14T21:35:49.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers"
]
| text-classification | false | RomanCast | null | RomanCast/camembert-miam-loria-finetuned | 8 | null | transformers | 13,535 | ---
language:
- fr
--- |
Peltarion/dnabert-minilm-mini | 22478613bea9563b77b6a4168300731eb58f9341 | 2022-07-02T11:29:19.000Z | [
"pytorch",
"bert",
"transformers",
"DNA",
"license:mit"
]
| null | false | Peltarion | null | Peltarion/dnabert-minilm-mini | 8 | null | transformers | 13,536 | ---
tags:
- DNA
license: mit
---
## MiniDNA mini model
This is a distilled version of [DNABERT](https://github.com/jerryji1993/DNABERT) by using MiniLM technique. It has a BERT architecture with 3 layers and 384 hidden units, pre-trained on 6-mer DNA sequences. For more details on the pre-training scheme and methods, please check the original [thesis report](http://www.diva-portal.org/smash/record.jsf?dswid=846&pid=diva2%3A1676068&c=1&searchType=SIMPLE&language=en&query=joana+palés&af=%5B%5D&aq=%5B%5B%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all)..
## How to Use
The model can be used to fine-tune on a downstream genomic task, e.g. promoter identification.
```python
import torch
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('Peltarion/dnabert-minilm-mini')
```
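A minimal sketch of preparing input for such a fine-tuned model, assuming the checkpoint bundles a tokenizer and that inputs follow the DNABERT convention of overlapping 6-mers (the helper function below is hypothetical, not from the original repository):
```python
from transformers import AutoTokenizer, BertForSequenceClassification

def to_kmers(sequence: str, k: int = 6) -> str:
    # "ATGCGTAC" -> "ATGCGT TGCGTA GCGTAC": overlapping k-mers, space separated
    return " ".join(sequence[i:i + k] for i in range(len(sequence) - k + 1))

tokenizer = AutoTokenizer.from_pretrained('Peltarion/dnabert-minilm-mini')  # assumes a tokenizer is bundled
model = BertForSequenceClassification.from_pretrained('Peltarion/dnabert-minilm-mini')

inputs = tokenizer(to_kmers("ATGCGTACGTTAGCAAGGTT"), return_tensors="pt")
logits = model(**inputs).logits  # e.g. promoter vs. non-promoter after fine-tuning
```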
More details on how to fine-tune the model, dataset and additional source codes are available on [github.com/joanaapa/Distillation-DNABERT-Promoter](https://github.com/joanaapa/Distillation-DNABERT-Promoter). |
nboudad/Maghribert | 496e394fd2fd1f11d2795f14644a92ae7456ccc6 | 2022-06-14T09:27:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | nboudad | null | nboudad/Maghribert | 8 | null | transformers | 13,537 | ---
widget:
- text: "جاب ليا [MASK] ."
example_title: "example1"
- text: "مشيت نجيب [MASK] فالفرماسيان ."
example_title: "example2"
--- |
speechbrain/asr-wav2vec2-dvoice-swahili | 6a20f71cb41407664d7e6bdf315400cd4cefd7e1 | 2022-06-10T00:57:21.000Z | [
"wav2vec2",
"feature-extraction",
"sw",
"dataset:Dvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
]
| automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-dvoice-swahili | 8 | null | speechbrain | 13,538 | ---
language: "sw"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- Dvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Swahili (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on a [DVoice-VoxLingua107](https://zenodo.org/record/6342622) Swahili dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 8.83 | 22.78 | 9.46 | 23.16 |
# Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and is trained with the train transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Darija dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Swahili)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-dvoice-swahili", savedir="pretrained_models/asr-wav2vec2-dvoice-swahili")
asr_model.transcribe_file('speechbrain/asr-wav2vec2-dvoice-swahili/example_swahili.wav')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
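For example, the transcription call above becomes:
```python
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-dvoice-swahili",
    savedir="pretrained_models/asr-wav2vec2-dvoice-swahili",
    run_opts={"device": "cuda"},  # place the model on the GPU
)
asr_model.transcribe_file('speechbrain/asr-wav2vec2-dvoice-swahili/example_swahili.wav')
```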
# Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/DVoice/ASR/CTC
python train_with_wav2vec2.py hparams/train_sw_with_wav2vec.yaml --data_folder=/localscratch/dvoice_recipe_data/
```
Please, read the README.md carefully before running the experiment to make sure the dataset is structured as expected.
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1vNT7RjRuELs7pumBHmfYsrOp9m46D0ym?usp=sharing).
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# About DVoice
DVoice is a community initiative that aims to provide African low resources languages with data and models to facilitate their use of voice technologies. The lack of data on these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling recordings that are retrieved from social media. The DVoice platform currently manages 7 languages including Darija (Moroccan Arabic dialect) whose dataset appears on this version, Wolof, Mandingo, Serere, Pular, Diola, and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of technologies together.
# About AIOX Labs
Based in Rabat, London, and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies.
- It serves the growth of companies, the optimization of processes, and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business-ready data products with a solid algorithmic base and adaptability for the specific needs of each client.
- A complementary team made up of doctors in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems, and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The research areas of the laboratories are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network, and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}},
}
```
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution.
|
nmcahill/mtbi-classifier | 20919acd120e18b9b6d80756b6490732ef8f0ad4 | 2022-06-09T22:00:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | nmcahill | null | nmcahill/mtbi-classifier | 8 | null | transformers | 13,539 | Entry not found |
huggingtweets/tonebot_ | 24005f3cf1334daaebaf48607714803ff3b479ae | 2022-06-11T00:15:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/tonebot_ | 8 | null | transformers | 13,540 | ---
language: en
thumbnail: http://www.huggingtweets.com/tonebot_/1654906535396/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447253318380793858/VVNhWBGI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tone bot</div>
<div style="text-align: center; font-size: 14px;">@tonebot_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tone bot.
| Data | tone bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 537 |
| Tweets kept | 2713 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ot29sc5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tonebot_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g614pb8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g614pb8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tonebot_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ahmeddbahaa/mt5-base-finetune-ar-xlsum | 2d1694c8bafb98d10acae50e28d99266cd897974 | 2022-06-12T13:55:10.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"mT5_multilingual_XLSum",
"abstractive summarization",
"ar",
"xlsum",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/mt5-base-finetune-ar-xlsum | 8 | null | transformers | 13,541 | ---
license: apache-2.0
tags:
- summarization
- mT5_multilingual_XLSum
- mt5
- abstractive summarization
- ar
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetune-ar-xlsum
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2546
- Rouge-1: 22.2
- Rouge-2: 9.57
- Rouge-l: 20.26
- Gen Len: 19.0
- Bertscore: 71.43
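A minimal inference sketch, assuming the standard summarization pipeline applies; the input string is a placeholder for an Arabic news article:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-base-finetune-ar-xlsum")
arabic_article = "..."  # placeholder: full Arabic article text goes here
print(summarizer(arabic_article, max_length=64, truncation=True))
```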
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.9261 | 1.0 | 585 | 3.6314 | 18.19 | 6.49 | 16.37 | 19.0 | 70.17 |
| 3.8429 | 2.0 | 1170 | 3.4253 | 19.45 | 7.58 | 17.73 | 19.0 | 70.35 |
| 3.6311 | 3.0 | 1755 | 3.3569 | 20.83 | 8.54 | 18.9 | 19.0 | 70.89 |
| 3.4917 | 4.0 | 2340 | 3.3101 | 20.77 | 8.53 | 18.89 | 19.0 | 70.98 |
| 3.3873 | 5.0 | 2925 | 3.2867 | 21.47 | 9.0 | 19.54 | 19.0 | 71.23 |
| 3.3037 | 6.0 | 3510 | 3.2693 | 21.41 | 9.0 | 19.5 | 19.0 | 71.21 |
| 3.2357 | 7.0 | 4095 | 3.2581 | 22.05 | 9.36 | 20.04 | 19.0 | 71.43 |
| 3.1798 | 8.0 | 4680 | 3.2522 | 22.21 | 9.56 | 20.23 | 19.0 | 71.41 |
| 3.1359 | 9.0 | 5265 | 3.2546 | 22.27 | 9.58 | 20.23 | 19.0 | 71.46 |
| 3.0997 | 10.0 | 5850 | 3.2546 | 22.2 | 9.57 | 20.26 | 19.0 | 71.43 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ilhami/Tr_En_AcademicTranslation | ec914b5c8feaf5ff7b89c2d51e4a8e17c8430fee | 2022-06-12T19:05:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"en",
"dataset:Parallel Corpora for Turkish-English Academic Translations",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | ilhami | null | ilhami/Tr_En_AcademicTranslation | 8 | null | transformers | 13,542 | ---
language:
- tr
- en
tags:
- translation
license: apache-2.0
datasets:
- Parallel Corpora for Turkish-English Academic Translations
metrics:
- bleu
- sacrebleu
---
## Model Details
- **Developed by:** İlhami SEL
- **Model type:** Turkish-English Machine Translation -- Transformer-based (6 layers)
- **Language:** Turkish - English
- **Resources for more information:** Sel, İ. , Üzen, H. & Hanbay, D. (2021). Creating a Parallel Corpora for Turkish-English Academic Translations . Computer Science , 5th International Artificial Intelligence and Data Processing symposium , 335-340 . DOI: 10.53070/bbd.990959
```python
checkpoint = "ilhami/Tr_En_AcademicTranslation"
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to("cuda")
tr= ["Sohbet robotları son yıllarda yaygın bir şekilde kullanılmaya başlanmıştır. ",
"İnsanları taklit eden ve daha iyi müşteri memnuniyeti sağlayan sohbet robotları en gelişkin doğal dil işleme tekniklerine ihtiyaç duymaktadır. ",
"Bu çalışma sohbet robotu konuşmalarının niyet tahminini geliştirmeye odaklanmıştır." ,
"Kelime gösterimi için TF-IDF, Doc2vec ve BERT gibi geleneksel ve gelişmiş doğal dil işleme yöntemleri, çoklu sınıf ve çoklu etiket tahmini için ise lojistik regresyon, rastgele orman ve yapay sinir ağları kullanılmıştır." ,
"Sohbet robotu konuşma veri kümeleri, sinema bileti rezervasyonu, restoran rezervasyonu ve taksi çağırma olmak üzere üç farklı alandan alınmıştır. ",
"Bu çalışmanın sonunda, BERT ve BERT ile TF-IDF birleşimi modellerin diğer kombinasyonlardan daha iyi sonuç verdiği görülmüştür. ",
"BERT gibi ön eğitimli modellerden faydalanmanın daha iyi bağlamsal anlama sağladığı ortaya çıkmıştır. ",
"TF-IDF yerleştirmeleri, BERT gösterimi ile birleştirilerek niyet kategorisi tahmininin iyileştirilmesi amaçlanmıştır."]
encoded_text = tokenizer(tr, return_tensors="pt", padding = True).to("cuda")
generated_tokens = model.generate(**encoded_text)
en = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
|
c17hawke/first-model | 99b891c42e95902a3ed013eebbcc41a9ffa6397a | 2022-06-13T14:02:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | c17hawke | null | c17hawke/first-model | 8 | null | transformers | 13,543 | # First model |
ahmeddbahaa/AraT5-base-finetune-ar-xlsum | 2b7822f8219d175196c7d8db2508f20d76d5b292 | 2022-06-13T15:46:47.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"Arat5-base",
"abstractive summarization",
"ar",
"xlsum",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/AraT5-base-finetune-ar-xlsum | 8 | null | transformers | 13,544 | ---
tags:
- summarization
- Arat5-base
- abstractive summarization
- ar
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: AraT5-base-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraT5-base-finetune-ar-xlsum
This model is a fine-tuned version of [UBC-NLP/AraT5-base](https://huggingface.co/UBC-NLP/AraT5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4714
- Rouge-1: 29.55
- Rouge-2: 12.63
- Rouge-l: 25.8
- Gen Len: 18.76
- Bertscore: 73.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 11.9753 | 1.0 | 293 | 7.0887 | 11.93 | 2.56 | 10.93 | 17.19 | 63.85 |
| 6.7818 | 2.0 | 586 | 5.7712 | 19.94 | 6.34 | 17.65 | 18.64 | 69.0 |
| 5.9434 | 3.0 | 879 | 5.1083 | 23.51 | 8.56 | 20.66 | 18.88 | 70.78 |
| 5.451 | 4.0 | 1172 | 4.8538 | 25.84 | 10.05 | 22.63 | 18.42 | 72.04 |
| 5.1643 | 5.0 | 1465 | 4.6910 | 27.23 | 11.13 | 23.83 | 18.78 | 72.45 |
| 4.9693 | 6.0 | 1758 | 4.5950 | 28.42 | 11.71 | 24.82 | 18.74 | 72.94 |
| 4.8308 | 7.0 | 2051 | 4.5323 | 28.95 | 12.19 | 25.3 | 18.74 | 73.13 |
| 4.7284 | 8.0 | 2344 | 4.4956 | 29.19 | 12.37 | 25.53 | 18.76 | 73.18 |
| 4.653 | 9.0 | 2637 | 4.4757 | 29.44 | 12.48 | 25.63 | 18.78 | 73.23 |
| 4.606 | 10.0 | 2930 | 4.4714 | 29.55 | 12.63 | 25.8 | 18.76 | 73.3 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
course5i/SEAD-L-6_H-256_A-8-wnli | e1e67ce62a98cb06953f006c8bf421d4820f9646 | 2022-06-12T23:05:39.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:wnli",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
]
| text-classification | false | course5i | null | course5i/SEAD-L-6_H-256_A-8-wnli | 8 | null | transformers | 13,545 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- wnli
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-256_A-8-wnli
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as the teacher, using the SEAD framework on the **wnli** task. For weight initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch

hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.5634 | 1.2474 | 56.919 | 2.405 | 0.6859 | 71 |
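A minimal inference sketch, assuming the checkpoint loads through the standard auto classes; WNLI is a sentence-pair task, so premise and hypothesis are passed together (the example pair is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("course5i/SEAD-L-6_H-256_A-8-wnli")
model = AutoModelForSequenceClassification.from_pretrained("course5i/SEAD-L-6_H-256_A-8-wnli")

inputs = tokenizer("The trophy doesn't fit in the suitcase because it is too big.",
                   "The trophy is too big.", return_tensors="pt")
with torch.no_grad():
    prediction = model(**inputs).logits.argmax(dim=-1)  # 0/1 entailment label
```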
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
binay1999/distilbert-cybertexts-text-classification | d67f6896bf4a4496ee27848b347d9ac344723d9f | 2022-06-13T08:09:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | binay1999 | null | binay1999/distilbert-cybertexts-text-classification | 8 | null | transformers | 13,546 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-cybertexts-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-cybertexts-text-classification
This model is a fine-tuned version of [binay1999/distilbert-cybertexts-preprocessed](https://huggingface.co/binay1999/distilbert-cybertexts-preprocessed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1374 | 1.0 | 1000 | 0.1215 |
| 0.0769 | 2.0 | 2000 | 0.0959 |
| 0.039 | 3.0 | 3000 | 0.1104 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sampras343/wav2vec2-keyword-spotting-int8 | 794ac9baf8839daf6e4be2ac319faa57604b6277 | 2022-06-13T09:32:43.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"transformers"
]
| audio-classification | false | sampras343 | null | sampras343/wav2vec2-keyword-spotting-int8 | 8 | null | transformers | 13,547 | [anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) model quantized with [Optimum OpenVINO](https://github.com/dkurt/optimum-openvino/).
| Accuracy on eval (baseline) | Accuracy on eval (quantized) |
|-----------------------------|----------------------------------------|
| 0.9828 | 0.9553 (-0.0274) |
|
Alireza1044/mobilebert_cola | 6ff2be205729195ac1817d7bdbb93716078e97c6 | 2022-06-14T09:02:15.000Z | [
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Alireza1044 | null | Alireza1044/mobilebert_cola | 8 | null | transformers | 13,548 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5277813760438573
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cola
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6337
- Matthews Correlation: 0.5278
## Model description
More information needed
## Intended uses & limitations
More information needed
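As a hedged usage sketch (not part of the original card), the checkpoint can be queried with the text-classification pipeline to score grammatical acceptability on CoLA-style inputs; the label names depend on the exported `id2label` mapping and may appear as `LABEL_0`/`LABEL_1`.
```python
from transformers import pipeline

# Minimal sketch: score sentences for linguistic acceptability (CoLA task).
classifier = pipeline("text-classification", model="Alireza1044/mobilebert_cola")

sentences = [
    "The cat sat on the mat.",
    "The cat sat mat the on.",
]
for sentence, prediction in zip(sentences, classifier(sentences)):
    print(sentence, "->", prediction)  # e.g. {"label": "LABEL_1", "score": ...}
```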
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kpeyton/distilbert-base-uncased-finetuned-atuscol | 1fd4092c3e5754607e3bf081febbaade1a44a5c0 | 2022-06-20T10:28:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | kpeyton | null | kpeyton/distilbert-base-uncased-finetuned-atuscol | 8 | null | transformers | 13,549 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-atuscol
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-atuscol
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6200
## Model description
More information needed
## Intended uses & limitations
More information needed
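As an illustration only (the original card does not document usage), an extractive QA checkpoint like this one can be called through the question-answering pipeline; the question and context below are made-up examples.
```python
from transformers import pipeline

# Minimal sketch of extractive question answering with this checkpoint.
qa = pipeline(
    "question-answering",
    model="kpeyton/distilbert-base-uncased-finetuned-atuscol",
)

result = qa(
    question="What was the model fine-tuned from?",  # made-up example
    context="The model was fine-tuned from distilbert-base-uncased for five epochs.",
)
print(result)  # {"answer": ..., "score": ..., "start": ..., "end": ...}
```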
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 2.4169 |
| No log | 2.0 | 130 | 1.0977 |
| No log | 3.0 | 195 | 0.8621 |
| No log | 4.0 | 260 | 0.6932 |
| No log | 5.0 | 325 | 0.6200 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-Bert-base-uncased-newdata | bf014d357894ff205db3ebf79c3191cbe627ddfa | 2022-06-17T07:56:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jkhan447 | null | jkhan447/sarcasm-detection-Bert-base-uncased-newdata | 8 | null | transformers | 13,550 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased-newdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased-newdata
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5383
- Accuracy: 0.7766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Makabaka/bert-base-uncased-EnglishLawAI | 7ba3fd35e474af7c65b209a3029120d18477fb1c | 2022-06-15T17:56:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Makabaka | null | Makabaka/bert-base-uncased-EnglishLawAI | 8 | null | transformers | 13,551 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6174
## Model description
More information needed
## Intended uses & limitations
More information needed
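As a hedged usage sketch (not part of the original card), a masked-language-model checkpoint like this one can be queried with the fill-mask pipeline; the example sentence is an assumption chosen only to match the legal-English theme suggested by the repository name.
```python
from transformers import pipeline

# Minimal sketch: predict the masked token with this BERT MLM checkpoint.
fill_mask = pipeline("fill-mask", model="Makabaka/bert-base-uncased-EnglishLawAI")

predictions = fill_mask("The defendant was found [MASK] of all charges.")
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```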
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5225 | 1.0 | 670 | 2.4071 |
| 2.2459 | 2.0 | 1340 | 2.0490 |
| 2.1137 | 3.0 | 2010 | 2.1236 |
| 2.0192 | 4.0 | 2680 | 2.0374 |
| 1.9307 | 5.0 | 3350 | 1.9619 |
| 1.8619 | 6.0 | 4020 | 1.9072 |
| 1.823 | 7.0 | 4690 | 1.8499 |
| 1.7415 | 8.0 | 5360 | 1.7408 |
| 1.6994 | 9.0 | 6030 | 1.7243 |
| 1.6576 | 10.0 | 6700 | 1.7139 |
| 1.6109 | 11.0 | 7370 | 1.8658 |
| 1.593 | 12.0 | 8040 | 1.9678 |
| 1.5501 | 13.0 | 8710 | 1.7578 |
| 1.5288 | 14.0 | 9380 | 1.7830 |
| 1.5135 | 15.0 | 10050 | 1.8932 |
| 1.4906 | 16.0 | 10720 | 1.6174 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.1
- Tokenizers 0.12.1
|
Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE | af8d38679f9452ad203f2be60e5745c7f658061d | 2022-06-15T23:52:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Willy | null | Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE | 8 | null | transformers | 13,552 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-NLP-IE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-NLP-IE
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6260
- Accuracy: 0.7015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6052 | 1.0 | 9 | 0.6370 | 0.7015 |
| 0.5501 | 2.0 | 18 | 0.6260 | 0.7015 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jhliu/ClinicalNoteBERT-base-uncased-MIMIC-segment-note | 3923155620a7bc3fe0a4a034ea4ea4f7ea621973 | 2022-06-16T05:17:00.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | jhliu | null | jhliu/ClinicalNoteBERT-base-uncased-MIMIC-segment-note | 8 | null | transformers | 13,553 | Entry not found |
waboucay/camembert-base-finetuned-repnum_wl_3_classes | 67db19a015847ac77d558c98d232d9c753633647 | 2022-06-16T07:42:03.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
]
| text-classification | false | waboucay | null | waboucay/camembert-base-finetuned-repnum_wl_3_classes | 8 | null | transformers | 13,554 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 74.5 | 74.5 |
| test | 74.9 | 74.8 |
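As a usage illustration (not from the original card), the three-class NLI classifier can be called through the text-classification pipeline on a premise/hypothesis pair; how the two sentences are joined and what the three labels are called depend on the training setup, so both are assumptions here.
```python
from transformers import pipeline

# Minimal sketch: 3-class NLI over a French premise/hypothesis pair.
nli = pipeline(
    "text-classification",
    model="waboucay/camembert-base-finetuned-repnum_wl_3_classes",
)

premise = "Le projet de loi a été adopté par l'Assemblée nationale."
hypothesis = "Le texte a été rejeté."
# Assumption: the pair is passed as a single text/text_pair input.
print(nli({"text": premise, "text_pair": hypothesis}))
```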
|
income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b | 8d3baa64b96a1bc262acf1d94ec04946d056f598 | 2022-06-16T18:26:16.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | income | null | income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b | 8 | null | sentence-transformers | 13,555 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
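A small follow-up sketch (not in the original template): once sentences are encoded, retrieval-style scoring reduces to cosine similarity between query and passage embeddings, e.g.:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b')

query_embedding = model.encode("What causes asthma?", convert_to_tensor=True)
passage_embeddings = model.encode(
    ["Asthma is often triggered by allergens and airway inflammation.",
     "The Eiffel Tower is located in Paris."],
    convert_to_tensor=True,
)

# Cosine similarity between the query and each passage (higher = more relevant).
scores = util.cos_sim(query_embedding, passage_embeddings)
print(scores)
```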
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-bioasq-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 92924 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
S2312dal/M6_cross | 6242a1fedda3dbd6fbdbaa01a4cc2e2d113fd890 | 2022-06-18T14:10:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | S2312dal | null | S2312dal/M6_cross | 8 | null | transformers | 13,556 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M6_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M6_cross
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0084
- Pearson: 0.9811
- Spearmanr: 0.9075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 6.0
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0059 | 1.0 | 105 | 0.0158 | 0.9633 | 0.9054 |
| 0.001 | 2.0 | 210 | 0.0102 | 0.9770 | 0.9103 |
| 0.0008 | 3.0 | 315 | 0.0083 | 0.9805 | 0.9052 |
| 0.0011 | 4.0 | 420 | 0.0075 | 0.9812 | 0.9082 |
| 0.0017 | 5.0 | 525 | 0.0084 | 0.9811 | 0.9075 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BM-K/KoMiniLM-68M | 68692fc3cb6c472e9ca4850a1c62b66e873a2616 | 2022-06-23T12:00:07.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2002.10957",
"transformers"
]
| text-classification | false | BM-K | null | BM-K/KoMiniLM-68M | 8 | 1 | transformers | 13,557 | # KoMiniLM
🐣 Korean mini language model
## Overview
Current language models usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this project, we release a lightweight Korean language model to address the aforementioned shortcomings of existing language models.
## Quick tour
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM-68M") # 68M model
model = AutoModel.from_pretrained("BM-K/KoMiniLM-68M")
inputs = tokenizer("안녕 세상아!", return_tensors="pt")
outputs = model(**inputs)
```
## Update history
** Updates on 2022.06.20 **
- Release KoMiniLM-bert-68M
** Updates on 2022.05.24 **
- Release KoMiniLM-bert-23M
## Pre-training
`Teacher Model`: [KLUE-BERT(base)](https://github.com/KLUE-benchmark/KLUE)
### Object
Self-Attention Distribution and Self-Attention Value-Relation [[Wang et al., 2020]](https://arxiv.org/abs/2002.10957) were distilled from each discrete layer of the teacher model into the student model. Wang et al. distilled only from the last transformer layer, whereas this project distils from every layer.
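To make the objective concrete, here is a minimal sketch (not the project's actual code) of the self-attention-distribution part of the loss for a single layer, assuming teacher and student attention maps with the same number of heads; the value-relation term is analogous, with attention probabilities replaced by value–value dot-product distributions.
```python
import torch
import torch.nn.functional as F

def attention_distribution_loss(student_attn: torch.Tensor,
                                teacher_attn: torch.Tensor) -> torch.Tensor:
    """KL divergence between teacher and student self-attention distributions.

    Both tensors are assumed to have shape (batch, num_heads, seq_len, seq_len)
    and to already be softmax-normalized over the last dimension, as returned
    by a BERT model with output_attentions=True.
    """
    # KL(teacher || student); 'batchmean' sums the pointwise terms and divides by the batch size.
    return F.kl_div(student_attn.clamp_min(1e-12).log(),
                    teacher_attn,
                    reduction="batchmean")

# Toy example with random "attention maps".
student = torch.softmax(torch.randn(2, 12, 8, 8), dim=-1)
teacher = torch.softmax(torch.randn(2, 12, 8, 8), dim=-1)
print(attention_distribution_loss(student, teacher))
```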
### Data sets
|Data|News comments|News article|
|:----:|:----:|:----:|
|size|10G|10G|
### Config
- **KoMiniLM-68M**
```json
{
"architectures": [
"BertForPreTraining"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"output_attentions": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"return_dict": false,
"torch_dtype": "float32",
"transformers_version": "4.13.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 32000
}
```
### Performance on subtasks
- The results of our fine-tuning experiments are an average of 3 runs for each task.
```
cd KoMiniLM-Finetune
bash scripts/run_all_kominilm.sh
```
|| #Param | Average | NSMC<br>(Acc) | Naver NER<br>(F1) | PAWS<br>(Acc) | KorNLI<br>(Acc) | KorSTS<br>(Spearman) | Question Pair<br>(Acc) | KorQuaD<br>(Dev)<br>(EM/F1) |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|KoBERT(KLUE)| 110M | 86.84 | 90.20±0.07 | 87.11±0.05 | 81.36±0.21 | 81.06±0.33 | 82.47±0.14 | 95.03±0.44 | 84.43±0.18 / <br>93.05±0.04 |
|KcBERT| 108M | 78.94 | 89.60±0.10 | 84.34±0.13 | 67.02±0.42| 74.17±0.52 | 76.57±0.51 | 93.97±0.27 | 60.87±0.27 / <br>85.01±0.14 |
|KoBERT(SKT)| 92M | 79.73 | 89.28±0.42 | 87.54±0.04 | 80.93±0.91 | 78.18±0.45 | 75.98±2.81 | 94.37±0.31 | 51.94±0.60 / <br>79.69±0.66 |
|DistilKoBERT| 28M | 74.73 | 88.39±0.08 | 84.22±0.01 | 61.74±0.45 | 70.22±0.14 | 72.11±0.27 | 92.65±0.16 | 52.52±0.48 / <br>76.00±0.71 |
| | | | | | | | | | |
|**KoMiniLM<sup>†</sup>**| **68M** | 85.90 | 89.84±0.02 | 85.98±0.09 | 80.78±0.30 | 79.28±0.17 | 81.00±0.07 | 94.89±0.37 | 83.27±0.08 / <br>92.08±0.06 |
|**KoMiniLM<sup>†</sup>**| **23M** | 84.79 | 89.67±0.03 | 84.79±0.09 | 78.67±0.45 | 78.10±0.07 | 78.90±0.11 | 94.81±0.12 | 82.11±0.42 / <br>91.21±0.29 |
- [NSMC](https://github.com/e9t/nsmc) (Naver Sentiment Movie Corpus)
- [Naver NER](https://github.com/naver/nlp-challenge) (NER task on Naver NLP Challenge 2018)
- [PAWS](https://github.com/google-research-datasets/paws) (Korean Paraphrase Adversaries from Word Scrambling)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Korean Natural Language Understanding)
- [Question Pair](https://github.com/songys/Question_pair) (Paired Question)
- [KorQuAD](https://korquad.github.io/) (The Korean Question Answering Dataset)
<img src = "https://user-images.githubusercontent.com/55969260/174229747-279122dc-9d27-4da9-a6e7-f9f1fe1651f7.png"> <br>
### User Contributed Examples
-
## Reference
- [KLUE BERT](https://github.com/KLUE-benchmark/KLUE)
- [KcBERT](https://github.com/Beomi/KcBERT)
- [SKT KoBERT](https://github.com/SKTBrain/KoBERT)
- [DistilKoBERT](https://github.com/monologg/DistilKoBERT)
- [lassl](https://github.com/lassl/lassl) |
Hardeep/distilbert-base-uncased-finetuned-emotion-01 | b50e4c7d1b5b17fe91415ef95605614d6eb0d864 | 2022-06-19T09:16:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Hardeep | null | Hardeep/distilbert-base-uncased-finetuned-emotion-01 | 8 | null | transformers | 13,558 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_mt5lephone-small_textdecoderonly_bs64 | 0af345f14ebf0a75e3633b12bdf27c1afcfda5f2 | 2022-06-21T06:37:00.000Z | [
"pytorch",
"speechmix",
"transformers"
]
| null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_mt5lephone-small_textdecoderonly_bs64 | 8 | null | transformers | 13,559 | Entry not found |
huggingtweets/alpha_convert | 0bf8ac8d7be2f9e4dbcb8a116d5773a854e7b6cd | 2022-06-20T03:39:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/alpha_convert | 8 | null | transformers | 13,560 | ---
language: en
thumbnail: http://www.huggingtweets.com/alpha_convert/1655696345558/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510046460556980225/LEbmoGEz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joe Cutler</div>
<div style="text-align: center; font-size: 14px;">@alpha_convert</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joe Cutler.
| Data | Joe Cutler |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 300 |
| Short tweets | 435 |
| Tweets kept | 2511 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2p03ahbk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alpha_convert's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37xwt5py) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37xwt5py/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alpha_convert')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
joshanashakya/codebert_sourcecode_nmt_pn2ja_100E_2e-05LR_16B_6E_6D | 3f39fd76b1462015949fea6f23324662ba8a0556 | 2022-06-20T03:50:11.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | joshanashakya | null | joshanashakya/codebert_sourcecode_nmt_pn2ja_100E_2e-05LR_16B_6E_6D | 8 | null | transformers | 13,561 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-28 | 1ab65358de5893f5c4bb7881af2d7619bd8c8caa | 2022-06-21T13:28:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-28 | 8 | null | transformers | 13,562 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-33 | 68789ba636027585e434666a5d20ffa8131bb13e | 2022-06-21T13:27:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-33 | 8 | null | transformers | 13,563 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-56 | ff59a6c90f3641c087b15d5db39bd42595ab2d68 | 2022-06-21T13:28:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-56 | 8 | null | transformers | 13,564 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-30 | 476d571dd5cd3455836c40c89422c1c0557fd07c | 2022-06-21T13:27:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-30 | 8 | null | transformers | 13,565 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-5 | 4419a18f6ec50e051dfea0cfa6f5b48f3065dbb3 | 2022-06-21T13:28:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-5 | 8 | null | transformers | 13,566 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-22 | 56034a76d6a9ecaf66b25583c30e88b6a49e1dac | 2022-06-21T13:27:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-22 | 8 | null | transformers | 13,567 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-37 | a3409aafe3fd2bcca578e3208d29d5b7aee561a4 | 2022-06-21T13:27:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-37 | 8 | null | transformers | 13,568 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-12 | f3605f52cf2e877927947ac56dea2f7dc2aa42cf | 2022-06-21T13:28:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-12 | 8 | null | transformers | 13,569 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-36 | 2f70413f7da1ed794261a154b5417eb5b465d404 | 2022-06-21T13:27:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-36 | 8 | null | transformers | 13,570 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-4 | ee81845271584d37f59a95bed909e82e9dcd729c | 2022-06-21T13:28:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-4 | 8 | null | transformers | 13,571 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-20 | 5ffca570e72bbef89715d10b151db60644adea35 | 2022-06-21T13:28:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-20 | 8 | null | transformers | 13,572 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-58 | 76286d01ee45d4819e5cb3368010fda185e53047 | 2022-06-21T13:30:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-58 | 8 | null | transformers | 13,573 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-41 | 01f975a0dc9d9314254cac7cb696d6c5317e9e23 | 2022-06-21T13:28:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-41 | 8 | null | transformers | 13,574 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-24 | 76019c51f1b7ed8df08b167f5ed1ea24bbf79050 | 2022-06-21T13:28:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-24 | 8 | null | transformers | 13,575 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-49 | b231efc77898a3fde0f1e12fe84ca3f411d12539 | 2022-06-21T13:33:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-49 | 8 | null | transformers | 13,576 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-50 | 61d5499d6a05d5d4b779be056ac377be34094ee0 | 2022-06-21T13:28:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-50 | 8 | null | transformers | 13,577 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-60 | 7fba36f981842f17b96487fc21e92457974179d0 | 2022-06-21T13:30:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-60 | 8 | null | transformers | 13,578 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-1 | 559e7cbd271bdc614c620fc3f4642f4975401a25 | 2022-06-21T13:32:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-1 | 8 | null | transformers | 13,579 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-51 | 1d39933f516978ae3181282e13a6a8a0606a3727 | 2022-06-21T13:28:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-51 | 8 | null | transformers | 13,580 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-14 | c25154d7928fb4e4b1845de1f38f175a69be5821 | 2022-06-21T13:28:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-14 | 8 | null | transformers | 13,581 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-54 | ff1db086166fddbeaf98b1b2894152437c6aa6fd | 2022-06-21T13:28:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-54 | 8 | null | transformers | 13,582 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-16 | 96101ebf83e98a4ca3150010197f858e306b0099 | 2022-06-21T13:28:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-16 | 8 | null | transformers | 13,583 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-25 | a0ac7b807c91be59b4c7a5b2c71ba0bc6d3c68bd | 2022-06-21T13:28:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-25 | 8 | null | transformers | 13,584 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-53 | 60a94d02159471211319862a7c16acc9b96243e0 | 2022-06-21T13:28:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-53 | 8 | null | transformers | 13,585 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-15 | 47d61d6fabe051ce3f48985cd92e83da47221fbe | 2022-06-21T13:28:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-15 | 8 | null | transformers | 13,586 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-3 | 5c0712091aa0e6ee24f7279c0f60c16cf9ff4444 | 2022-06-21T13:30:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-3 | 8 | null | transformers | 13,587 | Entry not found |
paola-md/recipe-tis | 9d1d95595001c4d645e9cf44b3afcd6efde69bb1 | 2022-06-21T14:51:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | paola-md | null | paola-md/recipe-tis | 8 | null | transformers | 13,588 | Entry not found |
mmillet/distilrubert_tiny-2nd-finetune-epru | 5f7dbf6c2b67ef630fd43efeadb02294a505ea70 | 2022-06-21T14:58:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | mmillet | null | mmillet/distilrubert_tiny-2nd-finetune-epru | 8 | null | transformers | 13,589 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert_tiny-2nd-finetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert_tiny-2nd-finetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4467
- Accuracy: 0.8712
- F1: 0.8718
- Precision: 0.8867
- Recall: 0.8712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4947 | 1.0 | 12 | 0.4142 | 0.8773 | 0.8777 | 0.8907 | 0.8773 |
| 0.2614 | 2.0 | 24 | 0.3178 | 0.9018 | 0.9011 | 0.9069 | 0.9018 |
| 0.2079 | 3.0 | 36 | 0.3234 | 0.8773 | 0.8784 | 0.8850 | 0.8773 |
| 0.1545 | 4.0 | 48 | 0.3729 | 0.8834 | 0.8830 | 0.8946 | 0.8834 |
| 0.1028 | 5.0 | 60 | 0.2964 | 0.9018 | 0.9016 | 0.9073 | 0.9018 |
| 0.0986 | 6.0 | 72 | 0.2971 | 0.9141 | 0.9139 | 0.9152 | 0.9141 |
| 0.0561 | 7.0 | 84 | 0.3482 | 0.8957 | 0.8962 | 0.9023 | 0.8957 |
| 0.0336 | 8.0 | 96 | 0.3731 | 0.8957 | 0.8953 | 0.9014 | 0.8957 |
| 0.0364 | 9.0 | 108 | 0.4467 | 0.8712 | 0.8718 | 0.8867 | 0.8712 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Sayan01/tiny-bert-rte-distilled | 50f27f7eef078efdde174a3f94d981d4649961f6 | 2022-06-30T16:03:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sayan01 | null | Sayan01/tiny-bert-rte-distilled | 8 | null | transformers | 13,590 | Entry not found |
kenobi/SDO_VT1 | cf8994669127c049ccecb1fd42bcdd60eb3a7fa6 | 2022-06-22T18:40:36.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"arxiv:2006.03677",
"transformers",
"model-index"
]
| image-classification | false | kenobi | null | kenobi/SDO_VT1 | 8 | null | transformers | 13,591 | ---
tags:
- image-classification
- pytorch
metrics:
- accuracy
model-index:
- name: SDO_VT1
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8695651888847351
---
# NASA Solar Dynamics Observatory Vision Transformer v.1 (SDO_VT1)
## Authors:
[Frank Soboczenski](https://h21k.github.io/), King's College London, London, UK<br>
[Paul Wright](https://www.wrightai.com/), Wright AI Ltd, Leeds, UK
## General:
This Vision Transformer model has been fine-tuned on Solar Dynamics Observatory (SDO) data. The images used are available here:
[Solar Dynamics Observatory Gallery](https://sdo.gsfc.nasa.gov/gallery/main). The model was fine-tuned on SDO data for an active-region classification task. We aim to highlight the ease of use of the HuggingFace platform, its integration with popular deep learning frameworks such as PyTorch, TensorFlow, and JAX, performance monitoring with Weights & Biases, and how easily large pre-trained Transformer models can be fine-tuned for targeted purposes. To our knowledge, this is the first Vision Transformer model trained on NASA SDO mission data, and we are working on additional versions to address challenges in this domain.
<b>The data used was provided courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.
The authors gratefully acknowledge the entire NASA Solar Dynamics Observatory Mission Team.</b><br>
For the SDO team: this model is a first version for demonstration purposes. It is currently trained only on the SDO Gallery data, and we are working on additional data.
We will include more technical details here soon.
## Example Images
--> Drag one of the images below into the inference API field on the upper right.
Additional images for testing can be found at:
[Solar Dynamics Observatory Gallery](https://sdo.gsfc.nasa.gov/gallery/main)
You can use the following tags to further select images for testing:
"coronal holes", "loops" or "flares"
You can also choose "active regions" to get a general pool for testing.
### NASA_SDO_Coronal_Hole

### NASA_SDO_Coronal_Loop

### NASA_SDO_Solar_Flare

## Training data
The ViT model was pretrained on a dataset consisting of 14 million images and 21k classes ([ImageNet-21k](http://www.image-net.org/)).
More information on the base model used can be found here: (https://huggingface.co/google/vit-base-patch16-224-in21k)
## How to use this Model
(quick snippet that works on Google Colab — comment out the pip install line for local use if you already have transformers installed)
```python
!pip install transformers --quiet
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
from PIL import Image
import requests
url = 'https://sdo.gsfc.nasa.gov/assets/gallery/preview/211_coronalhole.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("kenobi/SDO_VT1")
model = AutoModelForImageClassification.from_pretrained("kenobi/SDO_VT1")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the three fine-tuned classes (NASA_SDO_Coronal_Hole, NASA_SDO_Coronal_Loop or NASA_SDO_Solar_Flare)
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
## BibTeX & References
A publication on this work is currently in preparation. In the meantime, please refer to this model by using the following citation:
```
@misc{sdovt2022,
author = {Frank Soboczenski and Paul J Wright},
title = {SDOVT: A Vision Transformer Model for Solar Dynamics Observatory (SDO) Data},
url = {https://huggingface.co/kenobi/SDO_VT1/},
version = {1.0},
year = {2022},
}
```
For the base ViT model used please refer to:
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
For referring to Imagenet:
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
anita-clmnt/distilbert-base-uncased-finetuned-emotion | c0a5552ef0e033dcfb67ca4f762a1feaa502749b | 2022-06-22T18:17:24.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | anita-clmnt | null | anita-clmnt/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,592 | Entry not found |
deepesh0x/bert_wikipedia_sst2 | 3025e84049b0ade8b4251aab83165f16cb6a16fd | 2022-06-22T21:27:21.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:deepesh0x/autotrain-data-bert_wikipedia_sst2",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | deepesh0x | null | deepesh0x/bert_wikipedia_sst2 | 8 | null | transformers | 13,593 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-bert_wikipedia_sst2
co2_eq_emissions: 16.368556687663705
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1021934687
- CO2 Emissions (in grams): 16.368556687663705
## Validation Metrics
- Loss: 0.15712647140026093
- Accuracy: 0.9503340757238308
- Precision: 0.9515767251616308
- Recall: 0.9598083577322332
- AUC: 0.9857179850355002
- F1: 0.9556748161399324
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst2-1021934687
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst2-1021934687", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst2-1021934687", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
upsalite/bert-base-german-cased-finetuned-emotion-2-labels | 0801445b23bbe65eef4c70bf6038a57d89c390bd | 2022-07-05T12:50:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | upsalite | null | upsalite/bert-base-german-cased-finetuned-emotion-2-labels | 8 | null | transformers | 13,594 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-german-cased-finetuned-emotion-2-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-emotion-2-labels
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9788
- Accuracy: 0.835
- F1: 0.8345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6249 | 1.0 | 25 | 0.4990 | 0.775 | 0.7743 |
| 0.4072 | 2.0 | 50 | 0.4041 | 0.825 | 0.8250 |
| 0.2202 | 3.0 | 75 | 0.4166 | 0.84 | 0.8400 |
| 0.1028 | 4.0 | 100 | 0.4974 | 0.82 | 0.8191 |
| 0.0391 | 5.0 | 125 | 0.6061 | 0.79 | 0.7892 |
| 0.0175 | 6.0 | 150 | 0.6459 | 0.845 | 0.8449 |
| 0.0039 | 7.0 | 175 | 0.6933 | 0.84 | 0.8400 |
| 0.0033 | 8.0 | 200 | 0.7915 | 0.84 | 0.8396 |
| 0.001 | 9.0 | 225 | 0.9425 | 0.825 | 0.8250 |
| 0.0046 | 10.0 | 250 | 0.9074 | 0.82 | 0.82 |
| 0.001 | 11.0 | 275 | 0.9323 | 0.835 | 0.8348 |
| 0.0009 | 12.0 | 300 | 0.9144 | 0.84 | 0.8394 |
| 0.0003 | 13.0 | 325 | 0.9082 | 0.845 | 0.8450 |
| 0.0003 | 14.0 | 350 | 0.8913 | 0.84 | 0.8397 |
| 0.0003 | 15.0 | 375 | 0.9534 | 0.845 | 0.8450 |
| 0.0004 | 16.0 | 400 | 0.9498 | 0.835 | 0.8349 |
| 0.0027 | 17.0 | 425 | 0.9838 | 0.84 | 0.8400 |
| 0.0006 | 18.0 | 450 | 0.9853 | 0.845 | 0.8450 |
| 0.0003 | 19.0 | 475 | 0.9768 | 0.825 | 0.8243 |
| 0.0002 | 20.0 | 500 | 0.9788 | 0.835 | 0.8345 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
ryo0634/bert-base-zip-dependency-encoder-en | c9d1d1418e046f7553811f3e9737c60582ae1cc6 | 2022-06-23T11:43:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ryo0634 | null | ryo0634/bert-base-zip-dependency-encoder-en | 8 | null | transformers | 13,595 | Entry not found |
cambridgeltl/simctgt5_small_xsum | 6aba26df353f181e4d19456f438474fee2367250 | 2022-06-25T20:30:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | cambridgeltl | null | cambridgeltl/simctgt5_small_xsum | 8 | null | transformers | 13,596 | Entry not found |
Lvxue/distilled_t_1.5 | 366366d140565a3bae95f1d0bb63f3c9dfda091a | 2022-06-30T05:53:42.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Lvxue | null | Lvxue/distilled_t_1.5 | 8 | null | transformers | 13,597 | Average latency (ms) - 220.21 +\- 2.28
{'bleu': 4.90075699047093} |
cambridgeltl/mle_one_billion_word | 6e0dfa210a0511c1e62afa139a256a55146b780b | 2022-06-28T08:14:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | cambridgeltl | null | cambridgeltl/mle_one_billion_word | 8 | null | transformers | 13,598 | Entry not found |
xliu128/distilbert-base-uncased-finetuned-emotion | abfa04e3ae0771d39368e8dfaf233268d0c33115 | 2022-07-13T13:16:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | xliu128 | null | xliu128/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,599 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.924714869006902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.925
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8435 | 1.0 | 250 | 0.3160 | 0.9065 | 0.9045 |
| 0.2457 | 2.0 | 500 | 0.2168 | 0.925 | 0.9247 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|