modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
YassineB/test_Resnet | YassineB | 2022-10-04T14:34:56Z | 41 | 0 | transformers | [
"transformers",
"resnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-04T14:10:44Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
--- |
esc-benchmark/conformer-rnnt-common_voice | esc-benchmark | 2022-10-04T14:31:55Z | 1 | 0 | nemo | [
"nemo",
"esc",
"en",
"dataset:common_voice",
"region:us"
]
| null | 2022-10-04T14:31:39Z | ---
language:
- en
tags:
- esc
datasets:
- common_voice
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esc-benchmark/esc-datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="common_voice" \
--output_dir="./" \
--run_name="conformer-rnnt-common-voice" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--max_eval_duration_in_seconds="20" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
esc-benchmark/whisper-aed-switchboard | esc-benchmark | 2022-10-04T14:24:16Z | 0 | 0 | null | [
"esc",
"en",
"dataset:switchboard",
"region:us"
]
| null | 2022-10-04T14:23:58Z | ---
language:
- en
tags:
- esc
datasets:
- switchboard
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="switchboard" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-switchboard" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-benchmark/whisper-aed-ami | esc-benchmark | 2022-10-04T14:17:38Z | 0 | 0 | null | [
"esc",
"en",
"dataset:ami",
"region:us"
]
| null | 2022-10-04T14:17:20Z | ---
language:
- en
tags:
- esc
datasets:
- ami
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="ami" \
--max_steps="2500" \
--output_dir="./" \
--run_name="whisper-ami" \
--dropout_rate="0.1" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="500" \
--save_strategy="steps" \
--save_steps="500" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-benchmark/whisper-aed-earnings22 | esc-benchmark | 2022-10-04T14:14:02Z | 0 | 0 | null | [
"esc",
"en",
"dataset:earnings22",
"region:us"
]
| null | 2022-10-04T14:13:44Z | ---
language:
- en
tags:
- esc
datasets:
- earnings22
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="earnings22" \
--max_steps="2500" \
--output_dir="./" \
--run_name="whisper-earnings22" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="500" \
--save_strategy="steps" \
--save_steps="500" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-benchmark/whisper-aed-gigaspeech | esc-benchmark | 2022-10-04T14:06:02Z | 0 | 0 | null | [
"esc",
"en",
"dataset:gigaspeech",
"region:us"
]
| null | 2022-10-04T14:05:45Z | ---
language:
- en
tags:
- esc
datasets:
- gigaspeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="gigaspeech" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-gigaspeech" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
melll-uff/bertweetbr | melll-uff | 2022-10-04T14:00:33Z | 379 | 10 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-29T21:10:30Z | ---
language: pt
license: apache-2.0
---
# <a name="introduction"></a> BERTweet.BR: A Pre-Trained Language Model for Tweets in Portuguese
Sharing the same architecture as [BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet), we trained our model from scratch, following the [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) pre-training procedure, on a corpus of approximately 9 GB containing 100M Portuguese tweets.
## Usage
### Normalized Inputs
```python
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('melll-uff/bertweetbr')
tokenizer = AutoTokenizer.from_pretrained('melll-uff/bertweetbr', normalization=False)
# INPUT TWEETS ALREADY NORMALIZED!
inputs = [
"Procuro um amor , que seja bom pra mim ... vou procurar , eu vou até o fim :nota_musical:",
"Que jogo ontem @USER :mãos_juntas:",
"Demojizer para Python é :polegar_para_cima: e está disponível em HTTPURL"]
encoded_inputs = tokenizer(inputs, return_tensors="pt", padding=True)
with torch.no_grad():
last_hidden_states = model(**encoded_inputs)
# CLS token of the last hidden states. Shape: (number of input sentences, hidden size of the model)
last_hidden_states[0][:,0,:]
tensor([[-0.1430, -0.1325, 0.1595, ..., -0.0802, -0.0153, -0.1358],
[-0.0108, 0.1415, 0.0695, ..., 0.1420, 0.1153, -0.0176],
[-0.1854, 0.1866, 0.3163, ..., -0.2117, 0.2123, -0.1907]])
```
### Normalize raw input Tweets
```python
from emoji import demojize
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('melll-uff/bertweetbr')
tokenizer = AutoTokenizer.from_pretrained('melll-uff/bertweetbr', normalization=True)
inputs = [
"Procuro um amor , que seja bom pra mim ... vou procurar , eu vou até o fim 🎵",
"Que jogo ontem @cristiano 🙏",
"Demojizer para Python é 👍 e está disponível em https://pypi.org/project/emoji/"]
tokenizer.demojizer = lambda x: demojize(x, language='pt')
[tokenizer.normalizeTweet(s) for s in inputs]
# Tokenizer first normalizes tweet sentences
['Procuro um amor , que seja bom pra mim ... vou procurar , eu vou até o fim :nota_musical:',
'Que jogo ontem @USER :mãos_juntas:',
'Demojizer para Python é :polegar_para_cima: e está disponível em HTTPURL']
encoded_inputs = tokenizer(inputs, return_tensors="pt", padding=True)
with torch.no_grad():
last_hidden_states = model(**encoded_inputs)
# CLS token of the last hidden states. Shape: (number of input sentences, hidden size of the model)
last_hidden_states[0][:,0,:]
tensor([[-0.1430, -0.1325, 0.1595, ..., -0.0802, -0.0153, -0.1358],
[-0.0108, 0.1415, 0.0695, ..., 0.1420, 0.1153, -0.0176],
[-0.1854, 0.1866, 0.3163, ..., -0.2117, 0.2123, -0.1907]])
```
### Mask Filling with Pipeline
```python
from transformers import AutoTokenizer, pipeline
model_name = 'melll-uff/bertweetbr'
tokenizer = AutoTokenizer.from_pretrained('melll-uff/bertweetbr', normalization=False)
filler_mask = pipeline("fill-mask", model=model_name, tokenizer=tokenizer)
filler_mask("Rio é a <mask> cidade do Brasil.", top_k=5)
# Output
[{'sequence': 'Rio é a melhor cidade do Brasil.',
'score': 0.9871652126312256,
'token': 120,
'token_str': 'm e l h o r'},
{'sequence': 'Rio é a pior cidade do Brasil.',
'score': 0.005050931591540575,
'token': 316,
'token_str': 'p i o r'},
{'sequence': 'Rio é a maior cidade do Brasil.',
'score': 0.004420778248459101,
'token': 389,
'token_str': 'm a i o r'},
{'sequence': 'Rio é a minha cidade do Brasil.',
'score': 0.0021856199018657207,
'token': 38,
'token_str': 'm i n h a'},
{'sequence': 'Rio é a segunda cidade do Brasil.',
'score': 0.0002110043278662488,
'token': 667,
'token_str': 's e g u n d a'}]
``` |
esc-benchmark/whisper-aed-librispeech | esc-benchmark | 2022-10-04T13:49:46Z | 0 | 0 | null | [
"esc",
"en",
"dataset:librispeech",
"region:us"
]
| null | 2022-10-04T13:49:29Z | ---
language:
- en
tags:
- esc
datasets:
- librispeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="librispeech" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-librispeech" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-benchmark/wav2vec2-aed-chime4 | esc-benchmark | 2022-10-04T13:46:34Z | 4 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esc",
"en",
"dataset:chime4",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:46:20Z | ---
language:
- en
tags:
- esc
datasets:
- chime4
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esc-benchmark/esc-datasets" \
--model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \
--dataset_config_name="chime4" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-chime4" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="250" \
--final_generation_num_beams="5" \
--generation_length_penalty="0.6" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
nayan06/buy-others1 | nayan06 | 2022-10-04T13:46:25Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-04T13:46:14Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
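The card mentions clustering and semantic search as target tasks; here is a minimal semantic-search sketch with the same model and `sentence_transformers.util` (the query and corpus sentences are illustrative placeholders, and `{MODEL_NAME}` is the card's own placeholder):
```python
from sentence_transformers import SentenceTransformer, util

# '{MODEL_NAME}' is the placeholder used throughout this card.
model = SentenceTransformer('{MODEL_NAME}')

corpus = [
    "A man is eating food.",
    "A monkey is playing drums.",
    "A cheetah chases its prey across a field.",
]
query = "Someone is having a meal."

# Encode the corpus and the query into the 768-dimensional embedding space.
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.4f}\t{sentence}")
```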
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 100 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
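For readers who want to reproduce this setup, a hedged sketch of the corresponding `model.fit()` call follows; the base checkpoint and the training pairs are assumptions, since neither is published in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Base checkpoint is an assumption (the card only reveals an MPNet backbone with 768-dim mean pooling).
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder training pairs; CosineSimilarityLoss expects a similarity label per pair.
train_examples = [
    InputExample(texts=["buy this laptop", "purchase this laptop"], label=0.9),
    InputExample(texts=["buy this laptop", "return this laptop"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Values below mirror the fit() parameters listed above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    steps_per_epoch=100,
    scheduler="WarmupLinear",
    warmup_steps=10,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```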
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
esc-benchmark/wav2vec2-aed-switchboard | esc-benchmark | 2022-10-04T13:44:32Z | 4 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esc",
"en",
"dataset:switchboard",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:44:18Z | ---
language:
- en
tags:
- esc
datasets:
- switchboard
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esc-benchmark/esc-datasets" \
--model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \
--dataset_config_name="switchboard" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-switchboard" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="260" \
--final_generation_num_beams="5" \
--generation_length_penalty="0.8" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-benchmark/wav2vec2-aed-spgispeech | esc-benchmark | 2022-10-04T13:38:15Z | 5 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esc",
"en",
"dataset:spgispeech",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:38:01Z | ---
language:
- en
tags:
- esc
datasets:
- spgispeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esc-benchmark/esc-datasets" \
--model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \
--dataset_config_name="spgispeech" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-spgispeech" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="225" \
--final_generation_num_beams="14" \
--generation_length_penalty="1.6" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-benchmark/wav2vec2-aed-gigaspeech | esc-benchmark | 2022-10-04T13:35:54Z | 7 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esc",
"en",
"dataset:gigaspeech",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:35:41Z | ---
language:
- en
tags:
- esc
datasets:
- gigaspeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esc-benchmark/esc-datasets" \
--model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \
--dataset_config_name="gigaspeech" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-gigaspeech" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="200" \
--final_generation_num_beams="14" \
--generation_length_penalty="1.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-benchmark/wav2vec2-aed-voxpopuli | esc-benchmark | 2022-10-04T13:33:40Z | 5 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esc",
"en",
"dataset:voxpopuli",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:33:27Z | ---
language:
- en
tags:
- esc
datasets:
- voxpopuli
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esc-benchmark/esc-datasets" \
--model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \
--dataset_config_name="voxpopuli" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-voxpopuli" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="1" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="10001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="225" \
--final_generation_num_beams="5" \
--generation_length_penalty="0.8" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-benchmark/wav2vec2-aed-librispeech | esc-benchmark | 2022-10-04T13:27:46Z | 5 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esc",
"en",
"dataset:librispeech",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:27:32Z | ---
language:
- en
tags:
- esc
datasets:
- librispeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esc-benchmark/esc-datasets" \
--model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \
--dataset_config_name="librispeech" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-librispeech" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="300" \
--final_generation_num_beams="12" \
--generation_length_penalty="1.6" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-benchmark/wav2vec2-ctc-chime4 | esc-benchmark | 2022-10-04T13:23:26Z | 4 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esc",
"en",
"dataset:chime4",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:23:19Z | ---
language:
- en
tags:
- esc
datasets:
- chime4
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esc-benchmark/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-chime4-tokenizer" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="chime4" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-chime4" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esc-benchmark/wav2vec2-ctc-switchboard | esc-benchmark | 2022-10-04T13:22:21Z | 5 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esc",
"en",
"dataset:switchboard",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:22:13Z | ---
language:
- en
tags:
- esc
datasets:
- switchboard
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esc-benchmark/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-switchboard-tokenizer" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="switchboard" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-switchboard" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esc-benchmark/wav2vec2-ctc-ami | esc-benchmark | 2022-10-04T13:21:19Z | 4 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esc",
"en",
"dataset:ami",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:21:11Z | ---
language:
- en
tags:
- esc
datasets:
- ami
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esc-benchmark/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-ami-tokenizer" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="ami" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-ami" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
Akshata/autotrain-person-name-validity1-1655358687 | Akshata | 2022-10-04T13:17:17Z | 99 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"en",
"dataset:Akshata/autotrain-data-person-name-validity1",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-04T13:15:05Z | ---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Akshata/autotrain-data-person-name-validity1
co2_eq_emissions:
emissions: 0.015012024821802214
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1655358687
- CO2 Emissions (in grams): 0.0150
## Validation Metrics
- Loss: 0.038
- Accuracy: 0.991
- Precision: 0.000
- Recall: 0.000
- F1: 0.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Akshata/autotrain-person-name-validity1-1655358687
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Akshata/autotrain-person-name-validity1-1655358687", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Akshata/autotrain-person-name-validity1-1655358687", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
esc-benchmark/wav2vec2-ctc-gigaspeech | esc-benchmark | 2022-10-04T13:15:20Z | 4 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esc",
"en",
"dataset:gigaspeech",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T13:15:12Z | ---
language:
- en
tags:
- esc
datasets:
- gigaspeech
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esc-benchmark/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-gigaspeech-tokenizer" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="gigaspeech" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-gigaspeech" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
bharadwajkg/finetuning-cardiffnlp-twitter-roberta-base-sentiment | bharadwajkg | 2022-10-04T12:28:23Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-04T10:47:00Z | ---
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: finetuning-cardiffnlp-twitter-roberta-base-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.7433333333333333
- name: F1
type: f1
value: 0.7418048347838402
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-cardiffnlp-twitter-roberta-base-sentiment
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0244
- Accuracy: 0.7433
- F1: 0.7418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
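As a rough guide, the hyperparameters above map onto a `transformers.TrainingArguments` object along these lines; `output_dir` and anything not listed are assumptions, and the stated Adam betas/epsilon and linear scheduler are the library defaults:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the settings listed above; unlisted arguments keep their defaults.
training_args = TrainingArguments(
    output_dir="finetuning-cardiffnlp-twitter-roberta-base-sentiment",  # assumption
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```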
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
alessandroen/asr_test | alessandroen | 2022-10-04T11:10:56Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-23T10:46:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: asr_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asr_test
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
bharadwajkg/finetuning-sentiment-model-3000-samples-imdb | bharadwajkg | 2022-10-04T10:26:13Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-04T10:12:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3054
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Nithiwat/mdeberta-v3-base_claim-detection | Nithiwat | 2022-10-04T10:24:41Z | 100 | 1 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"claim-detection",
"en",
"dataset:Nithiwat/claim-detection",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-04T07:42:30Z | ---
language:
- en
tags:
- text-classification
- claim-detection
license: "mit"
datasets:
- Nithiwat/claim-detection
widget:
- text: "This is the best cast iron skillet you will ever buy."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book"
--- |
openai/clip-vit-large-patch14-336 | openai | 2022-10-04T09:41:39Z | 4,489,460 | 223 | transformers | [
"transformers",
"pytorch",
"tf",
"clip",
"zero-shot-image-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
]
| zero-shot-image-classification | 2022-04-22T14:57:43Z | ---
tags:
- generated_from_keras_callback
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
model-index:
- name: clip-vit-large-patch14-336
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# clip-vit-large-patch14-336
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.8.2
- Tokenizers 0.12.1
|
mouss/autotrain-damages-1652858619 | mouss | 2022-10-04T09:41:15Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:mouss/autotrain-data-damages",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-04T09:39:50Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- mouss/autotrain-data-damages
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.007316433431312107
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1652858619
- CO2 Emissions (in grams): 0.0073
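No usage snippet is included for this image classifier; a minimal inference sketch with the generic `transformers` pipeline is shown below (the image path is a placeholder):
```python
from transformers import pipeline

# "damaged_car.jpg" is a placeholder for a local image file or URL.
classifier = pipeline("image-classification", model="mouss/autotrain-damages-1652858619")
print(classifier("damaged_car.jpg"))
```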
## Validation Metrics
- Loss: 0.082
- Accuracy: 0.989
- Precision: 1.000
- Recall: 0.978
- AUC: 0.995
- F1: 0.989 |
GItaf/bert-base-uncased-bert-base-uncased-mc-weight1-epoch15 | GItaf | 2022-10-04T09:32:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-02T15:01:50Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-bert-base-uncased-mc-weight1-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-bert-base-uncased-mc-weight1-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8027
- Cls loss: 3.4449
- Lm loss: 4.3556
- Cls Accuracy: 0.5706
- Cls F1: 0.5697
- Cls Precision: 0.5753
- Cls Recall: 0.5706
- Perplexity: 77.91
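The reported perplexity is consistent with being the exponential of the LM loss; a quick one-line check (illustrative only):
```python
import math

# exp(LM loss) should reproduce the reported perplexity: exp(4.3556) ~ 77.9
print(math.exp(4.3556))  # ~77.92, matching the Perplexity of 77.91 above
```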
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 6.9526 | 1.0 | 3470 | 6.3154 | 1.7748 | 4.5399 | 0.4991 | 0.4577 | 0.4421 | 0.4991 | 93.68 |
| 6.0876 | 2.0 | 6940 | 6.1427 | 1.6773 | 4.4643 | 0.5545 | 0.5342 | 0.5717 | 0.5545 | 86.86 |
| 5.7231 | 3.0 | 10410 | 6.0206 | 1.5955 | 4.4240 | 0.5902 | 0.5759 | 0.6020 | 0.5902 | 83.43 |
| 5.3877 | 4.0 | 13880 | 5.9857 | 1.5772 | 4.4073 | 0.6092 | 0.6031 | 0.6052 | 0.6092 | 82.05 |
| 5.1092 | 5.0 | 17350 | 6.3742 | 1.9981 | 4.3748 | 0.5942 | 0.5901 | 0.5964 | 0.5942 | 79.42 |
| 4.8504 | 6.0 | 20820 | 6.4511 | 2.0776 | 4.3737 | 0.5890 | 0.5875 | 0.6041 | 0.5890 | 79.34 |
| 4.6369 | 7.0 | 24290 | 6.9857 | 2.6268 | 4.3571 | 0.5827 | 0.5796 | 0.5979 | 0.5827 | 78.03 |
| 4.4667 | 8.0 | 27760 | 6.9550 | 2.6075 | 4.3458 | 0.5833 | 0.5831 | 0.5904 | 0.5833 | 77.16 |
| 4.3127 | 9.0 | 31230 | 7.2041 | 2.8518 | 4.3504 | 0.5902 | 0.5856 | 0.5935 | 0.5902 | 77.51 |
| 4.1777 | 10.0 | 34700 | 7.4233 | 3.0746 | 4.3467 | 0.5793 | 0.5770 | 0.5829 | 0.5793 | 77.22 |
| 4.0871 | 11.0 | 38170 | 7.4997 | 3.1488 | 4.3489 | 0.5746 | 0.5749 | 0.5853 | 0.5746 | 77.39 |
| 3.9991 | 12.0 | 41640 | 7.6636 | 3.3113 | 4.3502 | 0.5602 | 0.5605 | 0.5676 | 0.5602 | 77.49 |
| 3.9461 | 13.0 | 45110 | 7.6065 | 3.2514 | 4.3530 | 0.5695 | 0.5690 | 0.5738 | 0.5695 | 77.71 |
| 3.9013 | 14.0 | 48580 | 7.7562 | 3.4017 | 4.3523 | 0.5787 | 0.5785 | 0.5823 | 0.5787 | 77.65 |
| 3.8731 | 15.0 | 52050 | 7.8027 | 3.4449 | 4.3556 | 0.5706 | 0.5697 | 0.5753 | 0.5706 | 77.91 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
flair/ner-multi-fast | flair | 2022-10-04T09:19:01Z | 198 | 6 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"de",
"nl",
"es",
"dataset:conll2003",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- en
- de
- nl
- es
datasets:
- conll2003
widget:
- text: "George Washington ging nach Washington"
---
## 4-Language NER in Flair (English, German, Dutch and Spanish)
This is the fast 4-class NER model for the four CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). It also works reasonably well for closely related languages such as French.
F1-Score: **91.51** (CoNLL-03 English), **85.72** (CoNLL-03 German revised), **86.22** (CoNLL-03 Dutch), **85.78** (CoNLL-03 Spanish)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-multi-fast")
# make example sentence in any of the four languages
sentence = Sentence("George Washington ging nach Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.9977)]
Span [5]: "Washington" [− Labels: LOC (0.9895)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus, MultiCorpus
from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the multi-language corpus
corpus: Corpus = MultiCorpus([
CONLL_03(), # English corpus
CONLL_03_GERMAN(), # German corpus
CONLL_03_DUTCH(), # Dutch corpus
CONLL_03_SPANISH(), # Spanish corpus
])
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# GloVe embeddings
WordEmbeddings('glove'),
# FastText embeddings
WordEmbeddings('de'),
# contextual string embeddings, forward
FlairEmbeddings('multi-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('multi-backward-fast'),
]
# embedding stack consists of GloVe, FastText and Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-multi-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following papers when using this model.
```
@misc{akbik2019multilingual,
title={Multilingual sequence labeling with one model},
author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland},
booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop},
year = {2019}
}
```
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
|
sd-concepts-library/crb-surrealz | sd-concepts-library | 2022-10-04T08:59:07Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-04T08:59:02Z | ---
license: mit
---
### crb-surrealz on Stable Diffusion
This is the `<crbsurreal>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
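Outside of the notebooks, a minimal sketch of loading the embedding with a recent `diffusers` release might look like the following; the Stable Diffusion v1.4 base checkpoint is an assumption, and `<crbsurreal>` is the placeholder token named above:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; any SD 1.x checkpoint compatible with the concept should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Pull the learned embedding straight from this repository.
pipe.load_textual_inversion("sd-concepts-library/crb-surrealz")

image = pipe("a surreal landscape in the style of <crbsurreal>").images[0]
image.save("crb-surrealz-sample.png")
```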
Here is the new concept you will be able to use as a `style`:












|
eunyounglee/wav2vec_korean | eunyounglee | 2022-10-04T05:53:52Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-04T03:25:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec_korean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec_korean
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.0
|
farzeen/ppo-LunarLander-v2 | farzeen | 2022-10-04T05:23:46Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-04T05:23:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 249.94 +/- 23.25
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
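A hedged sketch of what that TODO might become; the checkpoint filename follows the usual `<algo>-<env>.zip` convention of `huggingface_sb3` uploads and is an assumption:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repository's file list for the exact name.
checkpoint = load_from_hub(
    repo_id="farzeen/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Quick sanity check against the reported mean reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```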
|
anas-awadalla/gpt2-medium-span-head-finetuned-squad | anas-awadalla | 2022-10-04T05:08:07Z | 475 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-09-26T20:15:16Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-medium-span-head-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-span-head-finetuned-squad
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
Hasanmurad/banglabert-bert-finetuned-ner | Hasanmurad | 2022-10-04T05:07:40Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-04T04:43:17Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: banglabert-bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# banglabert-bert-finetuned-ner
This model is a fine-tuned version of [csebuetnlp/banglabert](https://huggingface.co/csebuetnlp/banglabert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9526
- Precision: 0.0143
- Recall: 0.0769
- F1: 0.0241
- Accuracy: 0.0143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 1 | 2.0085 | 0.0143 | 0.0769 | 0.0241 | 0.0143 |
| No log | 2.0 | 2 | 1.9711 | 0.0143 | 0.0769 | 0.0241 | 0.0143 |
| No log | 3.0 | 3 | 1.9526 | 0.0143 | 0.0769 | 0.0241 | 0.0143 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
KES/ENG-TEC | KES | 2022-10-04T05:04:39Z | 113 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Trinidadian Creole",
"Caribbean dialect",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-02T14:47:01Z | ---
tags:
- text2text-generation
- Trinidadian Creole
- Caribbean dialect
license: apache-2.0
---
# Standard English to Trinidad English Creole Translator
This model utilises the pre-trained T5-base model. It was fine-tuned on a custom dataset for translation from English to Trinidad English Creole. This model will be updated periodically as more data is compiled. For more on Caribbean English Creole, check out the library [Caribe](https://pypi.org/project/Caribe/).
___
# Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KES/ENG-TEC")
model = AutoModelForSeq2SeqLM.from_pretrained("KES/ENG-TEC")
text = "Where are you going now?"
inputs = tokenizer("eng:"+text, truncation=True, return_tensors='pt')
output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
translation=tokenizer.batch_decode(output, skip_special_tokens=True)
print("".join(translation)) #translation: Weh yuh going now.
```
___
|
jon-fernandes/vit-base-patch16-224-finetuned-flower | jon-fernandes | 2022-10-04T05:00:26Z | 216 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-04T04:51:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/liminal-spaces-2-0 | sd-concepts-library | 2022-10-04T04:01:31Z | 0 | 4 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-04T04:01:19Z | ---
license: mit
---
### Liminal spaces 2.0 on Stable Diffusion
This is the `liminal image` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




















|
nousr/alien-diffusion | nousr | 2022-10-04T03:47:20Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-04T03:44:58Z | ---
license: creativeml-openrail-m
---
|
huggingtweets/whoisaddison | huggingtweets | 2022-10-04T03:10:08Z | 96 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-16T23:20:50Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1570620381324578816/UG-qT7hg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Addison Rae</div>
<div style="text-align: center; font-size: 14px;">@whoisaddison</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Addison Rae.
| Data | Addison Rae |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 473 |
| Short tweets | 957 |
| Tweets kept | 1774 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6p4jofae/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @whoisaddison's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ofab5t2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ofab5t2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/whoisaddison')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
GItaf/gpt2-gpt2-mc-weight0.25-epoch15 | GItaf | 2022-10-04T02:57:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-03T14:42:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-mc-weight0.25-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-mc-weight0.25-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8155
- Cls loss: 3.4105
- Lm loss: 3.9623
- Cls Accuracy: 0.6104
- Cls F1: 0.6054
- Cls Precision: 0.6110
- Cls Recall: 0.6104
- Perplexity: 52.58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 4.6818 | 1.0 | 3470 | 4.4437 | 1.6250 | 4.0374 | 0.5274 | 0.4958 | 0.5329 | 0.5274 | 56.68 |
| 4.3828 | 2.0 | 6940 | 4.3628 | 1.4528 | 3.9994 | 0.6144 | 0.6088 | 0.6345 | 0.6144 | 54.56 |
| 4.2523 | 3.0 | 10410 | 4.3820 | 1.5899 | 3.9842 | 0.6092 | 0.6025 | 0.6382 | 0.6092 | 53.74 |
| 4.1442 | 4.0 | 13880 | 4.3954 | 1.6755 | 3.9763 | 0.6063 | 0.6010 | 0.6121 | 0.6063 | 53.32 |
| 4.0385 | 5.0 | 17350 | 4.4675 | 2.0051 | 3.9659 | 0.6150 | 0.6105 | 0.6194 | 0.6150 | 52.77 |
| 3.9513 | 6.0 | 20820 | 4.5223 | 2.2257 | 3.9654 | 0.6115 | 0.6049 | 0.6233 | 0.6115 | 52.74 |
| 3.8877 | 7.0 | 24290 | 4.5904 | 2.5003 | 3.9649 | 0.6012 | 0.5956 | 0.6049 | 0.6012 | 52.71 |
| 3.8367 | 8.0 | 27760 | 4.6320 | 2.6812 | 3.9612 | 0.6121 | 0.6061 | 0.6154 | 0.6121 | 52.52 |
| 3.7991 | 9.0 | 31230 | 4.6735 | 2.8534 | 3.9596 | 0.6104 | 0.6059 | 0.6139 | 0.6104 | 52.44 |
| 3.7697 | 10.0 | 34700 | 4.7126 | 3.0044 | 3.9610 | 0.6104 | 0.6063 | 0.6122 | 0.6104 | 52.51 |
| 3.7457 | 11.0 | 38170 | 4.7607 | 3.1961 | 3.9612 | 0.6133 | 0.6072 | 0.6182 | 0.6133 | 52.52 |
| 3.7265 | 12.0 | 41640 | 4.7927 | 3.3216 | 3.9617 | 0.6006 | 0.5951 | 0.6036 | 0.6006 | 52.55 |
| 3.7129 | 13.0 | 45110 | 4.7983 | 3.3431 | 3.9620 | 0.6104 | 0.6039 | 0.6133 | 0.6104 | 52.56 |
| 3.7016 | 14.0 | 48580 | 4.8061 | 3.3774 | 3.9612 | 0.6121 | 0.6059 | 0.6124 | 0.6121 | 52.52 |
| 3.6956 | 15.0 | 52050 | 4.8155 | 3.4105 | 3.9623 | 0.6104 | 0.6054 | 0.6110 | 0.6104 | 52.58 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GItaf/gpt2-gpt2-mc-weight1-epoch15 | GItaf | 2022-10-04T02:56:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-02T15:10:31Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-mc-weight1-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-mc-weight1-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.6876
- Cls loss: 3.7214
- Lm loss: 3.9640
- Cls Accuracy: 0.6040
- Cls F1: 0.5981
- Cls Precision: 0.6050
- Cls Recall: 0.6040
- Perplexity: 52.67
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 6.3006 | 1.0 | 3470 | 5.7550 | 1.7137 | 4.0417 | 0.5326 | 0.4990 | 0.4983 | 0.5326 | 56.92 |
| 5.5103 | 2.0 | 6940 | 5.5608 | 1.5450 | 4.0149 | 0.6075 | 0.6009 | 0.6160 | 0.6075 | 55.42 |
| 5.2167 | 3.0 | 10410 | 5.7608 | 1.7609 | 3.9988 | 0.5977 | 0.5917 | 0.6161 | 0.5977 | 54.53 |
| 4.9916 | 4.0 | 13880 | 5.8042 | 1.8106 | 3.9925 | 0.6035 | 0.5979 | 0.6063 | 0.6035 | 54.19 |
| 4.7224 | 5.0 | 17350 | 6.0519 | 2.0699 | 3.9807 | 0.6144 | 0.6100 | 0.6152 | 0.6144 | 53.56 |
| 4.4802 | 6.0 | 20820 | 6.3862 | 2.4050 | 3.9798 | 0.5948 | 0.5883 | 0.6071 | 0.5948 | 53.51 |
| 4.2926 | 7.0 | 24290 | 6.5793 | 2.6045 | 3.9733 | 0.5890 | 0.5819 | 0.5940 | 0.5890 | 53.16 |
| 4.1321 | 8.0 | 27760 | 6.8574 | 2.8865 | 3.9692 | 0.5977 | 0.5937 | 0.6047 | 0.5977 | 52.94 |
| 4.022 | 9.0 | 31230 | 7.1316 | 3.1624 | 3.9673 | 0.5948 | 0.5882 | 0.5980 | 0.5948 | 52.84 |
| 3.9255 | 10.0 | 34700 | 7.1732 | 3.2049 | 3.9664 | 0.6017 | 0.5985 | 0.6009 | 0.6017 | 52.79 |
| 3.8619 | 11.0 | 38170 | 7.3778 | 3.4104 | 3.9653 | 0.5994 | 0.5929 | 0.5994 | 0.5994 | 52.74 |
| 3.8141 | 12.0 | 41640 | 7.5111 | 3.5452 | 3.9638 | 0.5873 | 0.5834 | 0.5916 | 0.5873 | 52.66 |
| 3.7859 | 13.0 | 45110 | 7.6660 | 3.6998 | 3.9640 | 0.5960 | 0.5889 | 0.5976 | 0.5960 | 52.67 |
| 3.7628 | 14.0 | 48580 | 7.6558 | 3.6900 | 3.9636 | 0.5954 | 0.5899 | 0.5969 | 0.5954 | 52.65 |
| 3.7539 | 15.0 | 52050 | 7.6876 | 3.7214 | 3.9640 | 0.6040 | 0.5981 | 0.6050 | 0.6040 | 52.67 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
sd-concepts-library/chungus-poodl-pet | sd-concepts-library | 2022-10-04T02:56:02Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-04T02:55:50Z | ---
license: mit
---
### Chungus Poodl Pet on Stable Diffusion
This is the `<poodl-chungus-big>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
























































|
anas-awadalla/gpt2-large-span-head-finetuned-squad | anas-awadalla | 2022-10-04T01:27:34Z | 537 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-09-26T20:01:28Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-large-span-head-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-span-head-finetuned-squad
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
aaronryanmanuel/distilbert-base-uncased-finetuned-cola | aaronryanmanuel | 2022-10-04T01:21:22Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T15:38:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5229395497643199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8233
- Matthews Correlation: 0.5229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5367 | 1.0 | 535 | 0.5657 | 0.3638 |
| 0.3692 | 2.0 | 1070 | 0.5291 | 0.4912 |
| 0.2503 | 3.0 | 1605 | 0.5442 | 0.5038 |
| 0.1895 | 4.0 | 2140 | 0.7376 | 0.5112 |
| 0.1363 | 5.0 | 2675 | 0.8233 | 0.5229 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
waifu-research-department/Zero-Two | waifu-research-department | 2022-10-04T00:21:52Z | 0 | 3 | null | [
"region:us"
]
| null | 2022-10-03T01:35:48Z | # Description
Trainer: naotsue
Zero Two from Darling in the FranXX
# Dataset
>Training: 20 images
>Regularization: 300 images
# Info
>Model Used: Waifu Diffusion 1.3 (Epoch 6)
>Steps: 3000
>Keyword: ZERO-TWO (Use this in the prompt)
>Class Phrase: 1girl
 |
waifu-research-department/Ayanokouji-Kiyotaka | waifu-research-department | 2022-10-04T00:20:41Z | 0 | 2 | null | [
"region:us"
]
| null | 2022-10-03T01:35:12Z | # Description
Trainer: naotsue
Ayanokouji Kiyotaka from Classroom of the Elite
# Dataset
>Training: 20 images
>Regularization: 300 images
# Info
>Model Used: Waifu Diffusion 1.3 (Epoch 6)
>Steps: 3000
>Keyword: AYANOKOUJI (Use this in the prompt)
>Class Phrase: 1boy
 |
grantsl/distilbert-base-uncased-finetuned-emotion-2 | grantsl | 2022-10-04T00:16:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T23:32:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.8433
- F1: 0.8433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4095 | 1.0 | 875 | 0.3667 | 0.8353 | 0.8351 |
| 0.3348 | 2.0 | 1750 | 0.3608 | 0.8433 | 0.8433 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
MoseliMotsoehli/FishNet | MoseliMotsoehli | 2022-10-03T23:30:15Z | 0 | 1 | null | [
"license:openrail",
"region:us"
]
| null | 2022-10-02T20:37:48Z | ---
license: openrail
---
# <span style="color:blue">FishNet: AI For Fish Stock Estimation</span>
The attached model was trained on a dataset of 63,000 fish images belonging to 163 species. First, we trained a detectron2 model to detect and segment fish and fiducial markers on a board.
The detectron2 model was written in PyTorch, and the final classifier is a ResNet50 Keras model.
The sections below outline how to use the two models, and a sketch of the full pipeline follows.
## Packages
## Load Models
## Load an Image and Transform
## Run the Segmentation Model
## Visualize
## Classify the Fish
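A minimal end-to-end sketch of the steps above. The detectron2 config choice, the weight file names (`fishnet_segmentation.pth`, `fishnet_resnet50.h5`), the number of segmentation classes, the 224×224 classifier input size, and the image path are illustrative assumptions, not files or settings shipped with this repository.
```python
import cv2
import numpy as np
from tensorflow import keras
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer

# Load models: a detectron2 Mask R-CNN for fish/fiducial-marker segmentation
# and a Keras ResNet50 classifier for the 163 species.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2             # fish, fiducial marker (assumed)
cfg.MODEL.WEIGHTS = "fishnet_segmentation.pth"  # placeholder weight file
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.DEVICE = "cpu"                        # or "cuda"
predictor = DefaultPredictor(cfg)
classifier = keras.models.load_model("fishnet_resnet50.h5")  # placeholder weight file

# Load an image and transform
image = cv2.imread("test_fish.jpg")

# Run the segmentation model
outputs = predictor(image)
instances = outputs["instances"].to("cpu")

# Visualize
vis = Visualizer(image[:, :, ::-1])
overlay = vis.draw_instance_predictions(instances).get_image()
cv2.imwrite("segmented_fish.jpg", overlay[:, :, ::-1])

# Classify the fish: crop each detected box and run the ResNet50 classifier.
for box in instances.pred_boxes.tensor.numpy():
    x1, y1, x2, y2 = box.astype(int)
    crop = cv2.resize(image[y1:y2, x1:x2], (224, 224))
    probs = classifier.predict(np.expand_dims(crop / 255.0, axis=0))[0]
    print("predicted species index:", int(np.argmax(probs)))
``` |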
faisito/distilbert-base-uncased-finetuned-emotion | faisito | 2022-10-03T22:19:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T21:45:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.925520268497019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2170
- Accuracy: 0.9255
- F1: 0.9255
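The fine-tuned model can be used with the text-classification pipeline; a minimal sketch (the example sentence is only illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="faisito/distilbert-base-uncased-finetuned-emotion",
)
# Returns the predicted emotion label and its score.
print(classifier("I am so happy today!"))
```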
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8237 | 1.0 | 250 | 0.3205 | 0.9045 | 0.9002 |
| 0.2539 | 2.0 | 500 | 0.2170 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
suresh-subramanian/autotrain-fake-news-1649058542 | suresh-subramanian | 2022-10-03T22:13:59Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:suresh-subramanian/autotrain-data-fake-news",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T22:08:00Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- suresh-subramanian/autotrain-data-fake-news
co2_eq_emissions:
emissions: 12.699762619910537
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1649058542
- CO2 Emissions (in grams): 12.6998
## Validation Metrics
- Loss: 0.624
- Accuracy: 0.637
- Precision: 1.000
- Recall: 0.020
- AUC: 0.652
- F1: 0.039
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058542
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058542", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058542", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
suresh-subramanian/autotrain-fake-news-1649058539 | suresh-subramanian | 2022-10-03T22:12:00Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:suresh-subramanian/autotrain-data-fake-news",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T22:07:19Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- suresh-subramanian/autotrain-data-fake-news
co2_eq_emissions:
emissions: 0.040297872306469855
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1649058539
- CO2 Emissions (in grams): 0.0403
## Validation Metrics
- Loss: 0.478
- Accuracy: 0.779
- Precision: 0.814
- Recall: 0.520
- AUC: 0.881
- F1: 0.635
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058539
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058539", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058539", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
suresh-subramanian/autotrain-fake-news-1649058538 | suresh-subramanian | 2022-10-03T22:11:11Z | 100 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:suresh-subramanian/autotrain-data-fake-news",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T22:07:12Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- suresh-subramanian/autotrain-data-fake-news
co2_eq_emissions:
emissions: 0.04097854185629584
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1649058538
- CO2 Emissions (in grams): 0.0410
## Validation Metrics
- Loss: 0.387
- Accuracy: 0.815
- Precision: 0.760
- Recall: 0.730
- AUC: 0.902
- F1: 0.745
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058538
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058538", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058538", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
allenai/scibert_scivocab_uncased | allenai | 2022-10-03T22:06:12Z | 691,053 | 137 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language: en
---
# SciBERT
This is the pretrained model presented in [SciBERT: A Pretrained Language Model for Scientific Text](https://www.aclweb.org/anthology/D19-1371/), which is a BERT model trained on scientific text.
The training corpus was papers taken from [Semantic Scholar](https://www.semanticscholar.org). Corpus size is 1.14M papers, 3.1B tokens. We use the full text of the papers in training, not just abstracts.
SciBERT has its own wordpiece vocabulary (scivocab) that's built to best match the training corpus. We trained cased and uncased versions.
Available models include:
* `scibert_scivocab_cased`
* `scibert_scivocab_uncased`
The original repo can be found [here](https://github.com/allenai/scibert).
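For example, the uncased model can be loaded with the `transformers` library (the input sentence is only illustrative):
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

inputs = tokenizer("Transcription factors bind to promoter regions.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```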
If using these models, please cite the following paper:
```
@inproceedings{beltagy-etal-2019-scibert,
title = "SciBERT: A Pretrained Language Model for Scientific Text",
author = "Beltagy, Iz and Lo, Kyle and Cohan, Arman",
booktitle = "EMNLP",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1371"
}
```
|
allenai/scibert_scivocab_cased | allenai | 2022-10-03T22:04:46Z | 6,859 | 14 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language: en
---
# SciBERT
This is the pretrained model presented in [SciBERT: A Pretrained Language Model for Scientific Text](https://www.aclweb.org/anthology/D19-1371/), which is a BERT model trained on scientific text.
The training corpus was papers taken from [Semantic Scholar](https://www.semanticscholar.org). Corpus size is 1.14M papers, 3.1B tokens. We use the full text of the papers in training, not just abstracts.
SciBERT has its own wordpiece vocabulary (scivocab) that's built to best match the training corpus. We trained cased and uncased versions.
Available models include:
* `scibert_scivocab_cased`
* `scibert_scivocab_uncased`
The original repo can be found [here](https://github.com/allenai/scibert).
If using these models, please cite the following paper:
```
@inproceedings{beltagy-etal-2019-scibert,
title = "SciBERT: A Pretrained Language Model for Scientific Text",
author = "Beltagy, Iz and Lo, Kyle and Cohan, Arman",
booktitle = "EMNLP",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1371"
}
```
|
allenai/aspire-biencoder-compsci-spec | allenai | 2022-10-03T22:03:49Z | 116 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:2111.08366",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-04-23T14:15:21Z | ---
language: en
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Spec` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
## Model Card
### Model description
This model is a BERT bi-encoder model trained for similarity of title-abstract pairs in computer science scientific papers. The model is **initialized with the SPECTER model**. This model inputs the title and abstract of a paper and represents it with a single vector obtained by a scalar mix of the CLS token at every layer of the base encoder. These scalar mix parameters can be important for performance in some datasets. Importantly, these scalar mix weights are not included as part of this HF model; if you wish to use these parameters, please download the full model at: [`aspire-biencoder-compsci-spec-full.zip`](https://drive.google.com/file/d/1AHtzyEpyn7DeFYOdt86ik4n0tGaG5kMC/view?usp=sharing).
### Training data
The model is trained on pairs of co-cited papers in a contrastive learning setup. The model is trained on 1.2 million computer science paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers, for example - the papers in brackets below are all co-cited and each pair's title and abstract would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for document similarity tasks in **computer science** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from computer science, performance on other domains may be poorer.
### How to use
Follow instructions for use detailed on the model github repo: https://github.com/allenai/aspire#specter-cocite
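A rough sketch of encoding papers with the `transformers` checkpoint alone is shown below. Since the scalar-mix weights are not part of this HF model, the final-layer CLS vector is used here as an approximate document embedding, and the titles/abstracts are placeholders:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/aspire-biencoder-compsci-spec")
model = AutoModel.from_pretrained("allenai/aspire-biencoder-compsci-spec")
model.eval()

papers = [
    {"title": "Query paper title", "abstract": "Query paper abstract."},
    {"title": "Candidate paper title", "abstract": "Candidate paper abstract."},
]
texts = [p["title"] + tokenizer.sep_token + p["abstract"] for p in papers]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    embeddings = model(**enc).last_hidden_state[:, 0, :]  # one CLS vector per paper

# Rank candidates by L2 distance to the query (smaller = more similar).
print(torch.cdist(embeddings[0:1], embeddings[1:2]).item())
```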
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Performance here is reported on CSFCube (computer science/English). This is detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). CSFCube presents a finer-grained query via selected sentences in a query abstract based on which a finer-grained retrieval must be made from candidate abstracts. The bi-encoder above ignores the finer grained query sentences and uses the whole abstract - this presents a baseline in the paper.
We rank documents by the L2 distance between the query and candidate documents.
### Evaluation results
The released model `aspire-biencoder-compsci-spec` (and `aspire-biencoder-compsci-spec-full`) is compared against `allenai/specter`. `aspire-biencoder-compsci-spec-full`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released models `aspire-biencoder-compsci-spec` and `aspire-biencoder-compsci-spec-full` are the single best run among the 3 re-runs.
| | CSFCube aggregated | CSFCube aggregated|
|--------------------------------------------:|:---------:|:-------:|
| | MAP | NDCG%20 |
| `specter` | 34.23 | 53.28 |
| `aspire-biencoder-compsci-spec-full`<sup>*</sup> | 37.90 | 58.16 |
| `aspire-biencoder-compsci-spec` | 37.17 | 57.91 |
| `aspire-biencoder-compsci-spec-full` | 37.67 | 59.26 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-biencoder-biomed-scib`](https://huggingface.co/allenai/aspire-biencoder-biomed-scib): If you wanted to run on biomedical papers.
|
allenai/aspire-biencoder-biomed-spec | allenai | 2022-10-03T22:03:21Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:2111.08366",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-04-23T14:14:35Z | ---
language: en
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Spec` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
## Model Card
### Model description
This model is a BERT bi-encoder model trained for similarity of title-abstract pairs in biomedical scientific papers. The model is **initialized with the SPECTER encoder**. This model inputs the title and abstract of a paper and represents it with a single vector obtained by a scalar mix of the CLS token at every layer of the base encoder. These scalar mix parameters can be important for performance in some datasets. Importantly, these scalar mix weights are not included as part of this HF model, if you wish to use these parameters please download the full model at: [`aspire-biencoder-biomed-spec-full.zip`](https://drive.google.com/file/d/1MDCv9Fc33eP015HTWKi50WYXixh72h5c/view?usp=sharing).
### Training data
The model is trained on pairs of co-cited papers in a contrastive learning setup. The model is trained on 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers, for example - the papers in brackets below are all co-cited and each pair's title and abstract would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 1e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for document similarity tasks in **biomedical** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
### How to use
Follow instructions for use detailed on the model github repo: https://github.com/allenai/aspire#specter-cocite
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Here we report performance on RELISH (biomedical/English), and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts.
We rank documents by the L2 distance between the query and candidate documents.
### Evaluation results
The released model `aspire-biencoder-biomed-spec` (and `aspire-biencoder-biomed-spec-full`) is compared against `allenai/specter`. `aspire-biencoder-biomed-spec-full`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released models `aspire-biencoder-biomed-spec` and `aspire-biencoder-biomed-spec-full` are the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62| 77.20 |
| `aspire-biencoder-biomed-spec-full`<sup>*</sup> | 28.59 | 60.07 | 61.43| 77.96 |
| `aspire-biencoder-biomed-spec` | 26.07 | 54.89 | 61.47| 78.34 |
| `aspire-biencoder-biomed-spec-full` | 28.87 | 60.47 | 61.69| 78.22 |
Note that the absence of linear mixing parameters in the `aspire-biencoder-biomed-spec` hurts performance substantially compared to `aspire-biencoder-biomed-spec-full` in TRECCOVID - this dataset contains a larger candidate set than RELISH (~9000 vs 60). Consider the more performant Alternative models below for usage.
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-biencoder-compsci-spec`](https://huggingface.co/allenai/aspire-biencoder-compsci-spec): If you wanted to run on computer science papers.
[`aspire-biencoder-biomed-scib`](https://huggingface.co/allenai/aspire-biencoder-biomed-scib): This is an alternative bi-encoder model identical to the above model, except that it is initialized with SciBERT instead of SPECTER. The above model underperforms this model, `allenai/aspire-biencoder-biomed-scib` (even better, `aspire-biencoder-biomed-scib-full`) is recommended for use. |
anas-awadalla/gpt2-span-head-finetuned-squad | anas-awadalla | 2022-10-03T21:37:18Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-10-03T13:59:42Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-finetuned-squad
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
allenai/longformer-base-4096-extra.pos.embd.only | allenai | 2022-10-03T21:21:08Z | 341 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"longformer",
"en",
"arxiv:2004.05150",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language: en
---
# longformer-base-4096-extra.pos.embd.only
This model is similar to `longformer-base-4096`, but it was pretrained to preserve RoBERTa weights by freezing all RoBERTa weights and only training the additional position embeddings.
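It can be loaded with `transformers`; a minimal sketch (the tokenizer is taken from `allenai/longformer-base-4096`, since this checkpoint keeps RoBERTa's vocabulary, and the input text is only illustrative):
```python
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096-extra.pos.embd.only")

# Encode a (potentially very long) document of up to 4096 tokens.
inputs = tokenizer("A long document ...", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```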
### Citing
If you use `Longformer` in your research, please cite [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150).
```
@article{Beltagy2020Longformer,
title={Longformer: The Long-Document Transformer},
author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
journal={arXiv:2004.05150},
year={2020},
}
```
`Longformer` is an open-source project developed by [the Allen Institute for Artificial Intelligence (AI2)](http://www.allenai.org).
AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering.
|
ryyyyyy/1 | ryyyyyy | 2022-10-03T21:21:00Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-03T21:21:00Z | ---
license: creativeml-openrail-m
---
|
Hitmewmusic/First | Hitmewmusic | 2022-10-03T21:20:52Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-03T21:20:38Z | git clone https://github.com/sd-webui/stable-diffusion-webui.git |
allenai/bidaf-elmo | allenai | 2022-10-03T21:18:47Z | 27 | 12 | allennlp | [
"allennlp",
"tensorboard",
"question-answering",
"en",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags:
- allennlp
- question-answering
---
This is an implementation of the BiDAF model with ELMo embeddings. The basic layout is pretty simple: encode words as a combination of word embeddings and a character-level encoder, pass the word representations through a bi-LSTM/GRU, use a matrix of attentions to put question information into the passage word representations (this is the only part that is at all non-standard), pass this through another few layers of bi-LSTMs/GRUs, and do a softmax over span start and span end.
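A minimal usage sketch with the AllenNLP `Predictor` API (this assumes `allennlp` and `allennlp-models` are installed and that the archive can be loaded from this repository via an `hf://` path; the passage and question are only illustrative):
```python
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path("hf://allenai/bidaf-elmo")

result = predictor.predict(
    passage="The Matrix is a 1999 science fiction film written and directed by the Wachowskis.",
    question="Who directed The Matrix?",
)
print(result["best_span_str"])
```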
CAVEATS:
------
This model is based on ELMo. ELMo is not deterministic, meaning that you will see slight differences every time you run it. Also, ELMo likes to be warmed up, so we recommend processing dummy input before processing real workloads with it. |
allenai/aspire-contextualsentence-multim-biomed | allenai | 2022-10-03T21:17:27Z | 132 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:2111.08366",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-04-23T14:15:56Z | ---
language: en
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `tsAspire` and represents the papers proposed multi-vector model for fine-grained scientific document similarity.
## Model Card
### Model description
This model is a BERT based multi-vector model trained for fine-grained similarity of biomedical papers. This model inputs the title and abstract of a paper and represents a paper with contextual sentence vectors obtained by averaging the token representations of individual sentences - the whole title and abstract are encoded with cross-attention in the encoder block before obtaining sentence embeddings. The model is trained by minimizing a Wasserstein/Earth Mover's Distance between sentence vectors for a pair of documents - in the process also learning a sparse alignment between sentences in both documents. At test time, documents are ranked by the Wasserstein Distance between all sentences of two documents, or between a set of query sentences and a candidate document's sentences.
### Training data
The model is trained on pairs of co-cited papers with their sentences aligned by the co-citation context in a contrastive learning setup. The model is trained on 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers. For example - the papers in brackets below are all co-cited and each pair of papers would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for fine-grained document similarity tasks in **biomedical** scientific text using multiple vectors per document. The model allows _multiple_ fine-grained sentence-to-sentence similarities between documents. The model is well suited to an aspect conditional task formulation where a query might consist of sentence_s_ in a query document and candidates must be retrieved along the specified sentences. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from the biomedical domain, performance on other domains may be poorer.
### How to use
This model can be used via the `transformers` library, and some additional code to compute contextual sentence vectors and to make multiple matches using optimal transport.
View example usage and sample document matches in the model github repo: [`examples/demo-contextualsentence-multim.ipynb`](https://github.com/allenai/aspire/blob/main/examples/demo-contextualsentence-multim.ipynb)
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Here we report performance on RELISH (biomedical/English), and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts. In using this model we rank documents by the Wasserstein distance between the query sentences and a candidate's sentences.
### Evaluation results
The released model `aspire-contextualsentence-multim-biomed` is compared against `allenai/specter`. `aspire-contextualsentence-multim-biomed`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released models `aspire-contextualsentence-multim-biomed` is the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62 | 77.20 |
| `aspire-contextualsentence-multim-biomed`<sup>*</sup> | 30.92 | 62.23 | 62.57 | 78.95 |
| `aspire-contextualsentence-multim-biomed` | 31.25 | 62.99 | 62.24 | 78.65 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci): If you wanted to run on computer science papers and want to use a model trained to match _multiple_ sentences between documents.
[`aspire-contextualsentence-singlem-biomed`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-biomed): If you wanted to run on biomedical papers and want to use a model trained to match _single_ sentences between documents.
[`aspire-contextualsentence-singlem-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-compsci): If you wanted to run on computer science papers and want to use a model trained to match _single_ sentences between documents. |
allenai/aspire-biencoder-biomed-scib | allenai | 2022-10-03T21:12:46Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:2111.08366",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-04-23T13:47:04Z | ---
language: en
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Scib` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
## Model Card
### Model description
This model is a BERT bi-encoder model trained for similarity of title-abstract pairs in biomedical scientific papers. The model is **initialized with the SciBERT model**. This model inputs the title and abstract of a paper and represents it with a single vector obtained by a scalar mix of the CLS token at every layer of the SciBERT encoder. These scalar mix parameters can be important for performance in some datasets. Importantly, these scalar mix weights are not included as part of this HF model; if you wish to use these parameters, please download the full model at: [`aspire-biencoder-biomed-scib-full.zip`](https://drive.google.com/file/d/1X6S5qwaKUlI3N3RDQSG-tJCzMBWAnqxP/view?usp=sharing).
### Training data
The model is trained on pairs of co-cited papers in a contrastive learning setup. The model is trained on 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers, for example - the papers in brackets below are all co-cited and each pair's title and abstract would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for document similarity tasks in **biomedical** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
### How to use
Follow instructions for use detailed on the model github repo: https://github.com/allenai/aspire#specter-cocite
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Here we report performance on RELISH (biomedical/English), and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts.
We rank documents by the L2 distance between the query and candidate documents.
### Evaluation results
The released model `aspire-biencoder-biomed-scib` (and `aspire-biencoder-biomed-scib-full`) is compared against `allenai/specter`. `aspire-biencoder-biomed-scib-full`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released models `aspire-biencoder-biomed-scib` and `aspire-biencoder-biomed-scib-full` are the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62| 77.20 |
| `aspire-biencoder-biomed-scib-full`<sup>*</sup> | 30.60 | 62.07 | 61.43| 78.01 |
| `aspire-biencoder-biomed-scib` | 30.74 | 60.16 | 61.52| 78.07 |
| `aspire-biencoder-biomed-scib-full` | 31.45 | 63.15 | 61.34| 77.89 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-biencoder-compsci-spec`](https://huggingface.co/allenai/aspire-biencoder-compsci-spec): If you wanted to run on computer science papers.
[`aspire-biencoder-biomed-spec`](https://huggingface.co/allenai/aspire-biencoder-biomed-spec): This is an alternative bi-encoder model identical to the above model, except that it is initialized with `allenai/specter` instead of SciBert. This usually under-performs the model released here. |
allenai/naqanet | allenai | 2022-10-03T21:12:40Z | 19 | 2 | allennlp | [
"allennlp",
"tensorboard",
"question-answering",
"en",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags:
- allennlp
- question-answering
widget:
- context: 'A reusable launch system (RLS, or reusable launch vehicle, RLV) is a launch
system which is capable of launching a payload into space more than once. This
contrasts with expendable launch systems, where each launch vehicle is launched
once and then discarded. No completely reusable orbital launch system has ever
been created. Two partially reusable launch systems were developed, the Space
Shuttle and Falcon 9. The Space Shuttle was partially reusable: the orbiter (which
included the Space Shuttle main engines and the Orbital Maneuvering System engines),
and the two solid rocket boosters were reused after several months of refitting
work for each launch. The external tank was discarded after each flight.'
text: How many partially reusable launch systems were developed?
example_title: Reusable launch systems
- context: Robotics is an interdisciplinary branch of engineering and science that
includes mechanical engineering, electrical engineering, computer science, and
others. Robotics deals with the design, construction, operation, and use of robots,
as well as computer systems for their control, sensory feedback, and information
processing. These technologies are used to develop machines that can substitute
for humans. Robots can be used in any situation and for any purpose, but today
many are used in dangerous environments (including bomb detection and de-activation),
manufacturing processes, or where humans cannot survive. Robots can take on any
form but some are made to resemble humans in appearance. This is said to help
in the acceptance of a robot in certain replicative behaviors usually performed
by people. Such robots attempt to replicate walking, lifting, speech, cognition,
and basically anything a human can do.
text: What do robots that resemble humans attempt to do?
example_title: Robots
- context: In the first quarter, the Bears drew first blood as kicker Robbie Gould
nailed a 22-yard field goal for the only score of the period. In the second quarter,
the Bears increased their lead with Gould nailing a 42-yard field goal. They increased
their lead with Cutler firing a 7-yard TD pass to tight end Greg Olsen. The Bears
then closed out the first half with Gould's 41-yard field goal. In the third quarter,
the Vikes started to rally with running back Adrian Peterson's 1-yard touchdown
run (with the extra point attempt blocked). The Bears increased their lead over
the Vikings with Cutler's 2-yard TD pass to tight end Desmond Clark. The Vikings
then closed out the quarter with quarterback Brett Favre firing a 6-yard TD pass
to tight end Visanthe Shiancoe. An exciting fourth quarter ensued. The Vikings
started out the quarter's scoring with kicker Ryan Longwell's 41-yard field goal,
along with Adrian Peterson's second 1-yard TD run. The Bears then responded with
Cutler firing a 20-yard TD pass to wide receiver Earl Bennett. The Vikings then
completed the remarkable comeback with Favre finding wide receiver Sidney Rice
on a 6-yard TD pass on 4th-and-goal with 15 seconds left in regulation. The Bears
then took a knee to force overtime. In overtime, the Bears won the toss and marched
down the field, stopping at the 35-yard line. However, the potential game-winning
45-yard field goal attempt by Gould went wide right, giving the Vikings a chance
to win. After an exchange of punts, the Vikings had the ball at the 26-yard line
with 11 minutes left in the period. On the first play of scrimmage, Favre fired
a screen pass to Peterson who caught it and went 16 yards, before being confronted
by Hunter Hillenmeyer, who caused Peterson to fumble the ball, which was then
recovered by Bears' linebacker Nick Roach. The Bears then won on Jay Cutler's
game-winning 39-yard TD pass to wide receiver Devin Aromashodu. With the loss,
not only did the Vikings fall to 11-4, they also surrendered homefield advantage
to the Saints.
text: Who threw the longest touchdown pass of the game?
example_title: Argmax
- context: Hoping to rebound from their loss to the Patriots, the Raiders stayed at
home for a Week 16 duel with the Houston Texans. Oakland would get the early
lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard
touchdown pass to rookie wide receiver Chaz Schilens. The Texans would respond
with fullback Vonta Leach getting a 1-yard touchdown run, yet the Raiders would
answer with kicker Sebastian Janikowski getting a 33-yard and a 30-yard field
goal. Houston would tie the game in the second quarter with kicker Kris Brown
getting a 53-yard and a 24-yard field goal. Oakland would take the lead in the
third quarter with wide receiver Johnnie Lee Higgins catching a 29-yard touchdown
pass from Russell, followed up by an 80-yard punt return for a touchdown. The
Texans tried to rally in the fourth quarter as Brown nailed a 40-yard field goal,
yet the Raiders' defense would shut down any possible attempt.
text: How many yards was the longest passing touchdown?
example_title: Max
- context: In 1085, Guadalajara was retaken by the Christian forces of Alfonso VI.
The chronicles say that the Christian army was led by Alvar Fanez de Minaya,
one of the lieutenants of El Cid. From 1085 until the Battle of Las Navas de Tolosa
in 1212, the city suffered wars against the Almoravid and the Almohad Empires.
In spite of the wars, the Christian population could definitely settle down in
the area thanks to the repopulation with people from the North who received their
first fuero in 1133 from Alfonso VII. In 1219, the king Fernando III gave a new
fuero to the city. During the reign of Alfonso X of Castile, the protection of
the king allowed the city to develop its economy by protecting merchants and allowing
markets.
text: How many years did the city suffer wars against Almoravid and the Almohad
Empires?
example_title: Arithmetic
---
An augmented version of QANet that adds rudimentary numerical reasoning ability, trained on DROP (Dua et al., 2019), as published in the original DROP paper.
An augmented version of the QANet model with some rudimentary numerical reasoning abilities. The main idea here is that instead of just predicting a passage span after the usual QANet encoding, we add several different ‘answer abilities’: predicting a span from the question, predicting a count, or predicting an arithmetic expression. Near the end of the QANet model, we have a variable that predicts what kind of answer type we need, and each branch has separate modeling logic to predict that answer type. We then marginalize over all possible ways of getting to the right answer through each of these answer types.
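The marginalization step can be sketched in a few lines of PyTorch. This is an illustrative toy, not the actual AllenNLP implementation; every tensor name and value below is made up.

```python
import torch
import torch.nn.functional as F

# Toy log-likelihoods of the gold answer under each answer head
# (passage span, question span, count, arithmetic expression).
per_head_log_likelihood = torch.tensor([-2.3, -5.1, -4.0, -1.2])

# Logits from the answer-ability classifier near the end of the model.
answer_ability_logits = torch.tensor([0.7, -0.4, 0.1, 1.5])
answer_ability_log_probs = F.log_softmax(answer_ability_logits, dim=-1)

# Marginal log-likelihood: log sum_k P(ability k) * P(gold answer | ability k)
marginal_log_likelihood = torch.logsumexp(
    answer_ability_log_probs + per_head_log_likelihood, dim=-1
)
loss = -marginal_log_likelihood  # per-example training objective
```
|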
allenai/transformer_qa | allenai | 2022-10-03T21:12:21Z | 12 | 3 | allennlp | [
"allennlp",
"tensorboard",
"question-answering",
"en",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags:
- allennlp
- question-answering
---
A reading comprehension model patterned after the proposed model in Devlin et al. (2018), with improvements borrowed from the SQuAD model in the transformers project.
The model implements a reading comprehension model patterned after the proposed model in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018), with improvements borrowed from the SQuAD model in the transformers project. It predicts start tokens and end tokens with a linear layer on top of word-piece embeddings.
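A minimal sketch of that span head (illustrative shapes only; this is not the AllenNLP code):

```python
import torch
import torch.nn as nn

batch_size, seq_len, hidden_size = 2, 384, 768
wordpiece_embeddings = torch.randn(batch_size, seq_len, hidden_size)

# One linear layer produces a start logit and an end logit for every word piece.
span_head = nn.Linear(hidden_size, 2)
start_logits, end_logits = span_head(wordpiece_embeddings).split(1, dim=-1)

# Greedy decoding: take the highest-scoring start and end positions.
start = start_logits.squeeze(-1).argmax(dim=-1)
end = end_logits.squeeze(-1).argmax(dim=-1)
print(start, end)  # predicted span boundaries (end should not precede start in practice)
```
|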
misterkilgore/distilbert-base-uncased-finetuned-disaster-tweet | misterkilgore | 2022-10-03T20:09:45Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-30T12:51:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-disaster-tweet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-disaster-tweet
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4052
- Accuracy: 0.8207
- F1: 0.8203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
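For readers reproducing this setup, the hyperparameters listed above map roughly onto the following `TrainingArguments` (a sketch only: the model, dataset, and `Trainer` wiring are omitted, and the output directory name is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```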
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5056 | 1.0 | 96 | 0.4139 | 0.8188 | 0.8179 |
| 0.3991 | 2.0 | 192 | 0.4052 | 0.8207 | 0.8203 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Arklyn/fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1 | Arklyn | 2022-10-03T20:00:31Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-03T07:36:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2114
- Wer: 0.1762
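A short inference sketch (assuming the checkpoint works with the standard `transformers` ASR pipeline; `audio.wav` stands for a 16 kHz Indonesian speech recording you supply):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Arklyn/fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1",
)
print(asr("audio.wav")["text"])  # path to your own recording
```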
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 36
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9372 | 4.0 | 460 | 2.8359 | 1.0 |
| 2.2755 | 8.0 | 920 | 0.3708 | 0.3983 |
| 0.6614 | 12.0 | 1380 | 0.2433 | 0.2636 |
| 0.5071 | 16.0 | 1840 | 0.2347 | 0.2395 |
| 0.4516 | 20.0 | 2300 | 0.2213 | 0.2185 |
| 0.4206 | 24.0 | 2760 | 0.2222 | 0.2008 |
| 0.3844 | 28.0 | 3220 | 0.2072 | 0.1887 |
| 0.3678 | 32.0 | 3680 | 0.2071 | 0.1886 |
| 0.3565 | 36.0 | 4140 | 0.2015 | 0.1851 |
| 0.3388 | 40.0 | 4600 | 0.2137 | 0.1850 |
| 0.3235 | 44.0 | 5060 | 0.2072 | 0.1791 |
| 0.3173 | 48.0 | 5520 | 0.2095 | 0.1777 |
| 0.3088 | 52.0 | 5980 | 0.2102 | 0.1784 |
| 0.3 | 56.0 | 6440 | 0.2164 | 0.1772 |
| 0.2957 | 60.0 | 6900 | 0.2114 | 0.1762 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
neelrr/xlm-roberta-base-finetuned-panx-ta | neelrr | 2022-10-03T18:57:36Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-03T18:52:53Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-ta
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.ta
metrics:
- name: F1
type: f1
value: 0.8144578313253013
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- F1: 0.8145
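A short inference sketch (assuming the checkpoint works with the standard token-classification pipeline; the Tamil example sentence is only illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="neelrr/xlm-roberta-base-finetuned-panx-ta",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("தமிழ்நாடு இந்தியாவில் உள்ளது"))  # "Tamil Nadu is in India"
```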
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5477 | 1.0 | 209 | 0.2732 | 0.7305 |
| 0.2506 | 2.0 | 418 | 0.2425 | 0.7626 |
| 0.168 | 3.0 | 627 | 0.2183 | 0.8145 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
neelrr/xlm-roberta-base-finetuned-panx-hi-mr | neelrr | 2022-10-03T18:44:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-03T18:37:10Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-hi-mr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-hi-mr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1942
- F1: 0.8710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4628 | 1.0 | 417 | 0.2603 | 0.8062 |
| 0.2064 | 2.0 | 834 | 0.1951 | 0.8492 |
| 0.1289 | 3.0 | 1251 | 0.1942 | 0.8710 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jinhybr/text-summarization-t5base-xsum | jinhybr | 2022-10-03T18:42:32Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-29T13:58:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.282
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4789
- Rouge1: 28.282
- Rouge2: 7.6989
- Rougel: 22.2019
- Rougelsum: 22.197
- Gen Len: 18.8238
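A short usage sketch (assuming the checkpoint loads with the standard summarization pipeline under its repository name; the article text is a placeholder you replace with your own):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jinhybr/text-summarization-t5base-xsum")
article = "The local council has approved plans for a new cycle path along the river."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```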
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 2.7189 | 1.0 | 12753 | 2.4789 | 28.282 | 7.6989 | 22.2019 | 22.197 | 18.8238 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
albertdestajo/distilbert-base-uncased-finetuned-sst2 | albertdestajo | 2022-10-03T18:39:23Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-27T20:09:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: train
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9025229357798165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2991
- Accuracy: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.227 | 1.0 | 4210 | 0.2991 | 0.9025 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
radhe2205/finetuning-sentiment-model-3000-samples | radhe2205 | 2022-10-03T18:34:18Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T18:25:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.6566666666666666
- name: F1
type: f1
value: 0.6979472140762463
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7339
- Accuracy: 0.6567
- F1: 0.6979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
mofyrt/finetuning-sentiment-model-5000-samples | mofyrt | 2022-10-03T18:31:16Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T18:00:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-5000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9104
- name: F1
type: f1
value: 0.9115673114883537
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5802
- Accuracy: 0.9104
- F1: 0.9116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
ad7/finetuning-sentiment-model-3000-samples | ad7 | 2022-10-03T18:18:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T18:08:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.77
- name: F1
type: f1
value: 0.7561837455830389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6587
- Accuracy: 0.77
- F1: 0.7562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
eligabel/finetuning-sentiment-model-3000-samples | eligabel | 2022-10-03T18:12:07Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T18:02:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8166666666666667
- name: F1
type: f1
value: 0.8307692307692307
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6069
- Accuracy: 0.8167
- F1: 0.8308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
ilex/finetuning-sentiment-model-3000-samples | ilex | 2022-10-03T18:10:45Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T18:01:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.7533333333333333
- name: F1
type: f1
value: 0.7797619047619048
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6197
- Accuracy: 0.7533
- F1: 0.7798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
chenyueg/finetuning-sentiment-model-3000-samples | chenyueg | 2022-10-03T18:02:20Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T04:48:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.6566666666666666
- name: F1
type: f1
value: 0.6555183946488293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6751
- Accuracy: 0.6567
- F1: 0.6555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Bakuraza/FirstLunarLanding | Bakuraza | 2022-10-03T17:16:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-03T17:16:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 229.59 +/- 12.98
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed, not verified against this repo.
checkpoint = load_from_hub("Bakuraza/FirstLunarLanding", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
model-attribution-challenge/codegen-350M-multi | model-attribution-challenge | 2022-10-03T16:18:49Z | 108 | 2 | transformers | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T13:36:04Z | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Multi 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 350M** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 350M) was firstly initialized with *CodeGen-NL 350M*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained by Google using multiple TPU-v4-512 slices, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
jschmock/gewerke | jschmock | 2022-10-03T15:49:04Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-27T10:30:34Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gewerke
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gewerke
This model is a fine-tuned version of [svalabs/gbert-large-zeroshot-nli](https://huggingface.co/svalabs/gbert-large-zeroshot-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0089
- F1: 0.9974
## Label mapping
- 0 Abwasser-Wasser-Gasanlagen (sewage, water, and gas systems)
- 1 Andere Anlagen (other systems)
- 2 Gebäudeautomation (building automation)
- 3 Kälteanlagen (refrigeration systems)
- 4 Lufttechnische Anlagen (ventilation systems)
- 5 Starkstromanlagen (power installations)
- 6 Wärmeversorungsanlagen (heat supply systems)
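A classification sketch built on that mapping (assumptions: the checkpoint loads with the standard text-classification pipeline, and its predictions come back as generic `LABEL_<id>` strings following the 0–6 indexing above; the German example sentence is illustrative):

```python
from transformers import pipeline

# Index-to-Gewerk mapping copied from the list above.
id2gewerk = {
    0: "Abwasser-Wasser-Gasanlagen",
    1: "Andere Anlagen",
    2: "Gebäudeautomation",
    3: "Kälteanlagen",
    4: "Lufttechnische Anlagen",
    5: "Starkstromanlagen",
    6: "Wärmeversorungsanlagen",
}

classifier = pipeline("text-classification", model="jschmock/gewerke")
result = classifier("Die Lüftungsanlage im Bürogebäude muss gewartet werden.")[0]
label_id = int(result["label"].split("_")[-1])  # assumes labels look like LABEL_4
print(id2gewerk[label_id], result["score"])
```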
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 90 | 0.0444 | 0.9886 |
| No log | 1.99 | 180 | 0.0182 | 0.9947 |
| No log | 2.99 | 270 | 0.0103 | 0.9974 |
| No log | 3.99 | 360 | 0.0152 | 0.9946 |
| No log | 4.99 | 450 | 0.0089 | 0.9974 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/jang-sung-rak-style | sd-concepts-library | 2022-10-03T15:43:16Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-03T15:43:05Z | ---
license: mit
---
### Jang-Sung-Rak-Style on Stable Diffusion
This is the `<Jang-Sung-Rak-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:







|
GItaf/gpt2-gpt2-mc-weight2-epoch15 | GItaf | 2022-10-03T14:40:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-03T06:53:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-mc-weight2-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-mc-weight2-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.3067
- Cls loss: 3.6680
- Lm loss: 3.9663
- Cls Accuracy: 0.6069
- Cls F1: 0.6023
- Cls Precision: 0.6050
- Cls Recall: 0.6069
- Perplexity: 52.79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 8.1971 | 1.0 | 3470 | 7.6018 | 1.7777 | 4.0471 | 0.5251 | 0.4892 | 0.5282 | 0.5251 | 57.23 |
| 7.0508 | 2.0 | 6940 | 7.2679 | 1.6195 | 4.0269 | 0.6058 | 0.6001 | 0.6226 | 0.6058 | 56.08 |
| 6.5979 | 3.0 | 10410 | 7.4637 | 1.7253 | 4.0112 | 0.6184 | 0.6109 | 0.6367 | 0.6184 | 55.21 |
| 6.1811 | 4.0 | 13880 | 7.8253 | 1.9084 | 4.0063 | 0.6121 | 0.6074 | 0.6207 | 0.6121 | 54.94 |
| 5.7242 | 5.0 | 17350 | 8.1576 | 2.0824 | 3.9903 | 0.6219 | 0.6166 | 0.6195 | 0.6219 | 54.07 |
| 5.2588 | 6.0 | 20820 | 8.9472 | 2.4785 | 3.9872 | 0.6006 | 0.5942 | 0.6169 | 0.6006 | 53.90 |
| 4.8854 | 7.0 | 24290 | 9.2399 | 2.6264 | 3.9840 | 0.6063 | 0.5988 | 0.6123 | 0.6063 | 53.73 |
| 4.5785 | 8.0 | 27760 | 9.7123 | 2.8660 | 3.9768 | 0.6121 | 0.6076 | 0.6124 | 0.6121 | 53.35 |
| 4.3508 | 9.0 | 31230 | 10.2550 | 3.1400 | 3.9712 | 0.6086 | 0.6017 | 0.6121 | 0.6086 | 53.05 |
| 4.1817 | 10.0 | 34700 | 10.3110 | 3.1681 | 3.9711 | 0.6058 | 0.5990 | 0.6047 | 0.6058 | 53.04 |
| 4.0546 | 11.0 | 38170 | 11.0526 | 3.5396 | 3.9693 | 0.5988 | 0.5929 | 0.5997 | 0.5988 | 52.95 |
| 3.9481 | 12.0 | 41640 | 11.0193 | 3.5238 | 3.9675 | 0.6086 | 0.6038 | 0.6049 | 0.6086 | 52.85 |
| 3.9008 | 13.0 | 45110 | 11.2499 | 3.6394 | 3.9669 | 0.6121 | 0.6073 | 0.6090 | 0.6121 | 52.82 |
| 3.8558 | 14.0 | 48580 | 11.3606 | 3.6948 | 3.9666 | 0.6063 | 0.6000 | 0.6034 | 0.6063 | 52.81 |
| 3.8297 | 15.0 | 52050 | 11.3067 | 3.6680 | 3.9663 | 0.6069 | 0.6023 | 0.6050 | 0.6069 | 52.79 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
ironbar/stable_diffusion_gmartinez | ironbar | 2022-10-03T13:50:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-03T13:29:06Z | Stable-diffusion fine-tuned with this [repo](https://github.com/JoePenna/Dreambooth-Stable-Diffusion/) to create images from my friend Gonzalo.
 |
zannabethl/opus-mt-en-ro-finetuned-en-to-ro | zannabethl | 2022-10-03T13:00:35Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-03T12:22:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
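A usage sketch for English-to-Romanian inference (assuming the fine-tuned checkpoint loads with the standard translation pipeline under its repository name; the input sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_ro",
    model="zannabethl/opus-mt-en-ro-finetuned-en-to-ro",
)
print(translator("The weather is beautiful today.")[0]["translation_text"])
```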
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
damilojohn/Bert2BertForTextDescrambling | damilojohn | 2022-10-03T12:40:40Z | 111 | 1 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-03T00:44:12Z | ---
license: apache-2.0
---
This model receives scrambled or incoherent sentences as input and returns a meaningful sentence that uses the same words as the input, a form of grammar correction if you will.
It was trained on a dataset of permuted sentences derived from Wikipedia pages as inputs, with the correct arrangement of words as labels.
It is an encoder-decoder model that uses BERT's weights in both its encoder and decoder.
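The "BERT in both the encoder and the decoder" construction can be sketched with the `EncoderDecoderModel` API. This is an illustrative warm-start recipe, not necessarily the exact script used to train this checkpoint, and the scrambled input sentence is made up:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Warm-start a seq2seq model from two BERT checkpoints.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Wire up BERT's special tokens so the decoder can generate.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("ball the dog the chased", return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=16)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

(The warm-started model above is untrained as a seq2seq system; fine-tuning on scrambled/ordered sentence pairs is what produces the behavior described in this card.)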
|
cw1521/opus-mt-st-en | cw1521 | 2022-10-03T12:36:01Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-29T01:37:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-st-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-st-en
This model is a fine-tuned version of [cw1521/opus-mt-st-en](https://huggingface.co/cw1521/opus-mt-st-en) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
cw1521/opus-mt-en-st | cw1521 | 2022-10-03T12:31:35Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-29T02:29:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-st
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-st
This model is a fine-tuned version of [cw1521/opus-mt-en-st](https://huggingface.co/cw1521/opus-mt-en-st) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3710
- Bleu: 77.1725
- Gen Len: 60.3696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.3768 | 1.0 | 969 | 0.3710 | 77.1725 | 60.3696 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/aadhav-face | sd-concepts-library | 2022-10-03T12:22:50Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-03T12:22:47Z | ---
license: mit
---
### aadhav face on Stable Diffusion
This is the `<aadhav-face>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit | Muennighoff | 2022-10-03T12:16:09Z | 828 | 6 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"gptj",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:04Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-5.8B-weightedmean-nli-bitfit
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 74.07462686567165
- type: ap
value: 37.44692407529112
- type: f1
value: 68.28971003916419
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 66.63811563169165
- type: ap
value: 78.57252079915924
- type: f1
value: 64.5543087846584
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 77.21889055472263
- type: ap
value: 25.663426367826712
- type: f1
value: 64.26265688503176
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.06209850107067
- type: ap
value: 14.028219107023915
- type: f1
value: 48.10387189660778
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 82.30920000000002
- type: ap
value: 76.88786578621213
- type: f1
value: 82.15455656065011
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 41.584
- type: f1
value: 41.203137944390114
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 35.288000000000004
- type: f1
value: 34.672995558518096
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 38.34
- type: f1
value: 37.608755629529455
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 37.839999999999996
- type: f1
value: 36.86898201563507
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 30.936000000000003
- type: f1
value: 30.49401738527071
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 33.75
- type: f1
value: 33.38338946025617
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 13.727
- type: map_at_10
value: 26.740000000000002
- type: map_at_100
value: 28.218
- type: map_at_1000
value: 28.246
- type: map_at_3
value: 21.728
- type: map_at_5
value: 24.371000000000002
- type: ndcg_at_1
value: 13.727
- type: ndcg_at_10
value: 35.07
- type: ndcg_at_100
value: 41.947
- type: ndcg_at_1000
value: 42.649
- type: ndcg_at_3
value: 24.484
- type: ndcg_at_5
value: 29.282999999999998
- type: precision_at_1
value: 13.727
- type: precision_at_10
value: 6.223
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 10.835
- type: precision_at_5
value: 8.848
- type: recall_at_1
value: 13.727
- type: recall_at_10
value: 62.233000000000004
- type: recall_at_100
value: 93.67
- type: recall_at_1000
value: 99.14699999999999
- type: recall_at_3
value: 32.504
- type: recall_at_5
value: 44.239
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 40.553923271901695
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 32.49323183712211
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 55.89811361443445
- type: mrr
value: 70.16235764850724
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 82.50506557805856
- type: cos_sim_spearman
value: 79.50000423261176
- type: euclidean_pearson
value: 75.76190885392926
- type: euclidean_spearman
value: 76.7330737163434
- type: manhattan_pearson
value: 75.825318036112
- type: manhattan_spearman
value: 76.7415076434559
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (de-en)
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 75.49060542797494
- type: f1
value: 75.15379262352123
- type: precision
value: 74.99391092553932
- type: recall
value: 75.49060542797494
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (fr-en)
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.4182258419546555
- type: f1
value: 0.4182258419546555
- type: precision
value: 0.4182258419546555
- type: recall
value: 0.4182258419546555
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (ru-en)
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.013855213023900243
- type: f1
value: 0.0115460108532502
- type: precision
value: 0.010391409767925183
- type: recall
value: 0.013855213023900243
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (zh-en)
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.315955766192733
- type: f1
value: 0.315955766192733
- type: precision
value: 0.315955766192733
- type: recall
value: 0.315955766192733
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 81.74025974025973
- type: f1
value: 81.66568824876
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 33.59451202614059
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 29.128241446157165
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 26.715
- type: map_at_10
value: 35.007
- type: map_at_100
value: 36.352000000000004
- type: map_at_1000
value: 36.51
- type: map_at_3
value: 32.257999999999996
- type: map_at_5
value: 33.595000000000006
- type: ndcg_at_1
value: 33.906
- type: ndcg_at_10
value: 40.353
- type: ndcg_at_100
value: 45.562999999999995
- type: ndcg_at_1000
value: 48.454
- type: ndcg_at_3
value: 36.349
- type: ndcg_at_5
value: 37.856
- type: precision_at_1
value: 33.906
- type: precision_at_10
value: 7.854
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 17.549
- type: precision_at_5
value: 12.561
- type: recall_at_1
value: 26.715
- type: recall_at_10
value: 49.508
- type: recall_at_100
value: 71.76599999999999
- type: recall_at_1000
value: 91.118
- type: recall_at_3
value: 37.356
- type: recall_at_5
value: 41.836
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 19.663
- type: map_at_10
value: 27.086
- type: map_at_100
value: 28.066999999999997
- type: map_at_1000
value: 28.18
- type: map_at_3
value: 24.819
- type: map_at_5
value: 26.332
- type: ndcg_at_1
value: 25.732
- type: ndcg_at_10
value: 31.613999999999997
- type: ndcg_at_100
value: 35.757
- type: ndcg_at_1000
value: 38.21
- type: ndcg_at_3
value: 28.332
- type: ndcg_at_5
value: 30.264000000000003
- type: precision_at_1
value: 25.732
- type: precision_at_10
value: 6.038
- type: precision_at_100
value: 1.034
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 13.864
- type: precision_at_5
value: 10.241999999999999
- type: recall_at_1
value: 19.663
- type: recall_at_10
value: 39.585
- type: recall_at_100
value: 57.718
- type: recall_at_1000
value: 74.26700000000001
- type: recall_at_3
value: 29.845
- type: recall_at_5
value: 35.105
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 30.125
- type: map_at_10
value: 39.824
- type: map_at_100
value: 40.935
- type: map_at_1000
value: 41.019
- type: map_at_3
value: 37.144
- type: map_at_5
value: 38.647999999999996
- type: ndcg_at_1
value: 34.922
- type: ndcg_at_10
value: 45.072
- type: ndcg_at_100
value: 50.046
- type: ndcg_at_1000
value: 51.895
- type: ndcg_at_3
value: 40.251
- type: ndcg_at_5
value: 42.581
- type: precision_at_1
value: 34.922
- type: precision_at_10
value: 7.303999999999999
- type: precision_at_100
value: 1.0739999999999998
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 17.994
- type: precision_at_5
value: 12.475999999999999
- type: recall_at_1
value: 30.125
- type: recall_at_10
value: 57.253
- type: recall_at_100
value: 79.35799999999999
- type: recall_at_1000
value: 92.523
- type: recall_at_3
value: 44.088
- type: recall_at_5
value: 49.893
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.298000000000002
- type: map_at_10
value: 21.479
- type: map_at_100
value: 22.387
- type: map_at_1000
value: 22.483
- type: map_at_3
value: 19.743
- type: map_at_5
value: 20.444000000000003
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 24.887
- type: ndcg_at_100
value: 29.544999999999998
- type: ndcg_at_1000
value: 32.417
- type: ndcg_at_3
value: 21.274
- type: ndcg_at_5
value: 22.399
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 3.932
- type: precision_at_100
value: 0.666
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 8.927
- type: precision_at_5
value: 6.056
- type: recall_at_1
value: 16.298000000000002
- type: recall_at_10
value: 34.031
- type: recall_at_100
value: 55.769000000000005
- type: recall_at_1000
value: 78.19500000000001
- type: recall_at_3
value: 23.799999999999997
- type: recall_at_5
value: 26.562
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 10.958
- type: map_at_10
value: 16.999
- type: map_at_100
value: 17.979
- type: map_at_1000
value: 18.112000000000002
- type: map_at_3
value: 15.010000000000002
- type: map_at_5
value: 16.256999999999998
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 20.985
- type: ndcg_at_100
value: 26.216
- type: ndcg_at_1000
value: 29.675
- type: ndcg_at_3
value: 17.28
- type: ndcg_at_5
value: 19.301
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 3.968
- type: precision_at_100
value: 0.784
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 8.541
- type: precision_at_5
value: 6.468
- type: recall_at_1
value: 10.958
- type: recall_at_10
value: 29.903000000000002
- type: recall_at_100
value: 53.413
- type: recall_at_1000
value: 78.74799999999999
- type: recall_at_3
value: 19.717000000000002
- type: recall_at_5
value: 24.817
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 21.217
- type: map_at_10
value: 29.677
- type: map_at_100
value: 30.928
- type: map_at_1000
value: 31.063000000000002
- type: map_at_3
value: 26.611
- type: map_at_5
value: 28.463
- type: ndcg_at_1
value: 26.083000000000002
- type: ndcg_at_10
value: 35.217
- type: ndcg_at_100
value: 40.715
- type: ndcg_at_1000
value: 43.559
- type: ndcg_at_3
value: 30.080000000000002
- type: ndcg_at_5
value: 32.701
- type: precision_at_1
value: 26.083000000000002
- type: precision_at_10
value: 6.622
- type: precision_at_100
value: 1.115
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 14.629
- type: precision_at_5
value: 10.837
- type: recall_at_1
value: 21.217
- type: recall_at_10
value: 47.031
- type: recall_at_100
value: 70.378
- type: recall_at_1000
value: 89.704
- type: recall_at_3
value: 32.427
- type: recall_at_5
value: 39.31
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 19.274
- type: map_at_10
value: 26.398
- type: map_at_100
value: 27.711000000000002
- type: map_at_1000
value: 27.833000000000002
- type: map_at_3
value: 24.294
- type: map_at_5
value: 25.385
- type: ndcg_at_1
value: 24.886
- type: ndcg_at_10
value: 30.909
- type: ndcg_at_100
value: 36.941
- type: ndcg_at_1000
value: 39.838
- type: ndcg_at_3
value: 27.455000000000002
- type: ndcg_at_5
value: 28.828
- type: precision_at_1
value: 24.886
- type: precision_at_10
value: 5.6739999999999995
- type: precision_at_100
value: 1.0290000000000001
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 13.242
- type: precision_at_5
value: 9.292
- type: recall_at_1
value: 19.274
- type: recall_at_10
value: 39.643
- type: recall_at_100
value: 66.091
- type: recall_at_1000
value: 86.547
- type: recall_at_3
value: 29.602
- type: recall_at_5
value: 33.561
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 18.653666666666666
- type: map_at_10
value: 25.606666666666666
- type: map_at_100
value: 26.669333333333334
- type: map_at_1000
value: 26.795833333333334
- type: map_at_3
value: 23.43433333333333
- type: map_at_5
value: 24.609666666666666
- type: ndcg_at_1
value: 22.742083333333333
- type: ndcg_at_10
value: 29.978333333333335
- type: ndcg_at_100
value: 34.89808333333333
- type: ndcg_at_1000
value: 37.806583333333336
- type: ndcg_at_3
value: 26.223666666666674
- type: ndcg_at_5
value: 27.91033333333333
- type: precision_at_1
value: 22.742083333333333
- type: precision_at_10
value: 5.397083333333334
- type: precision_at_100
value: 0.9340000000000002
- type: precision_at_1000
value: 0.13691666666666663
- type: precision_at_3
value: 12.331083333333332
- type: precision_at_5
value: 8.805499999999999
- type: recall_at_1
value: 18.653666666666666
- type: recall_at_10
value: 39.22625000000001
- type: recall_at_100
value: 61.31049999999999
- type: recall_at_1000
value: 82.19058333333334
- type: recall_at_3
value: 28.517333333333333
- type: recall_at_5
value: 32.9565
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.07
- type: map_at_10
value: 21.509
- type: map_at_100
value: 22.335
- type: map_at_1000
value: 22.437
- type: map_at_3
value: 19.717000000000002
- type: map_at_5
value: 20.574
- type: ndcg_at_1
value: 18.865000000000002
- type: ndcg_at_10
value: 25.135999999999996
- type: ndcg_at_100
value: 29.483999999999998
- type: ndcg_at_1000
value: 32.303
- type: ndcg_at_3
value: 21.719
- type: ndcg_at_5
value: 23.039
- type: precision_at_1
value: 18.865000000000002
- type: precision_at_10
value: 4.263999999999999
- type: precision_at_100
value: 0.696
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 9.866999999999999
- type: precision_at_5
value: 6.902
- type: recall_at_1
value: 16.07
- type: recall_at_10
value: 33.661
- type: recall_at_100
value: 54.001999999999995
- type: recall_at_1000
value: 75.564
- type: recall_at_3
value: 23.956
- type: recall_at_5
value: 27.264
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 10.847
- type: map_at_10
value: 15.518
- type: map_at_100
value: 16.384
- type: map_at_1000
value: 16.506
- type: map_at_3
value: 14.093
- type: map_at_5
value: 14.868
- type: ndcg_at_1
value: 13.764999999999999
- type: ndcg_at_10
value: 18.766
- type: ndcg_at_100
value: 23.076
- type: ndcg_at_1000
value: 26.344
- type: ndcg_at_3
value: 16.150000000000002
- type: ndcg_at_5
value: 17.373
- type: precision_at_1
value: 13.764999999999999
- type: precision_at_10
value: 3.572
- type: precision_at_100
value: 0.6779999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 7.88
- type: precision_at_5
value: 5.712
- type: recall_at_1
value: 10.847
- type: recall_at_10
value: 25.141999999999996
- type: recall_at_100
value: 44.847
- type: recall_at_1000
value: 68.92099999999999
- type: recall_at_3
value: 17.721999999999998
- type: recall_at_5
value: 20.968999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 18.377
- type: map_at_10
value: 26.005
- type: map_at_100
value: 26.996
- type: map_at_1000
value: 27.116
- type: map_at_3
value: 23.712
- type: map_at_5
value: 24.859
- type: ndcg_at_1
value: 22.201
- type: ndcg_at_10
value: 30.635
- type: ndcg_at_100
value: 35.623
- type: ndcg_at_1000
value: 38.551
- type: ndcg_at_3
value: 26.565
- type: ndcg_at_5
value: 28.28
- type: precision_at_1
value: 22.201
- type: precision_at_10
value: 5.41
- type: precision_at_100
value: 0.88
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 12.531
- type: precision_at_5
value: 8.806
- type: recall_at_1
value: 18.377
- type: recall_at_10
value: 40.908
- type: recall_at_100
value: 63.563
- type: recall_at_1000
value: 84.503
- type: recall_at_3
value: 29.793999999999997
- type: recall_at_5
value: 34.144999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 20.246
- type: map_at_10
value: 27.528000000000002
- type: map_at_100
value: 28.78
- type: map_at_1000
value: 29.002
- type: map_at_3
value: 25.226
- type: map_at_5
value: 26.355
- type: ndcg_at_1
value: 25.099
- type: ndcg_at_10
value: 32.421
- type: ndcg_at_100
value: 37.2
- type: ndcg_at_1000
value: 40.693
- type: ndcg_at_3
value: 28.768
- type: ndcg_at_5
value: 30.23
- type: precision_at_1
value: 25.099
- type: precision_at_10
value: 6.245
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.767999999999999
- type: precision_at_5
value: 9.881
- type: recall_at_1
value: 20.246
- type: recall_at_10
value: 41.336
- type: recall_at_100
value: 63.098
- type: recall_at_1000
value: 86.473
- type: recall_at_3
value: 30.069000000000003
- type: recall_at_5
value: 34.262
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 14.054
- type: map_at_10
value: 20.25
- type: map_at_100
value: 21.178
- type: map_at_1000
value: 21.288999999999998
- type: map_at_3
value: 18.584999999999997
- type: map_at_5
value: 19.536
- type: ndcg_at_1
value: 15.527
- type: ndcg_at_10
value: 23.745
- type: ndcg_at_100
value: 28.610999999999997
- type: ndcg_at_1000
value: 31.740000000000002
- type: ndcg_at_3
value: 20.461
- type: ndcg_at_5
value: 22.072
- type: precision_at_1
value: 15.527
- type: precision_at_10
value: 3.882
- type: precision_at_100
value: 0.6930000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 9.181000000000001
- type: precision_at_5
value: 6.433
- type: recall_at_1
value: 14.054
- type: recall_at_10
value: 32.714
- type: recall_at_100
value: 55.723
- type: recall_at_1000
value: 79.72399999999999
- type: recall_at_3
value: 23.832
- type: recall_at_5
value: 27.754
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 6.122
- type: map_at_10
value: 11.556
- type: map_at_100
value: 12.998000000000001
- type: map_at_1000
value: 13.202
- type: map_at_3
value: 9.657
- type: map_at_5
value: 10.585
- type: ndcg_at_1
value: 15.049000000000001
- type: ndcg_at_10
value: 17.574
- type: ndcg_at_100
value: 24.465999999999998
- type: ndcg_at_1000
value: 28.511999999999997
- type: ndcg_at_3
value: 13.931
- type: ndcg_at_5
value: 15.112
- type: precision_at_1
value: 15.049000000000001
- type: precision_at_10
value: 5.831
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 10.749
- type: precision_at_5
value: 8.365
- type: recall_at_1
value: 6.122
- type: recall_at_10
value: 22.207
- type: recall_at_100
value: 47.08
- type: recall_at_1000
value: 70.182
- type: recall_at_3
value: 13.416
- type: recall_at_5
value: 16.672
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 4.672
- type: map_at_10
value: 10.534
- type: map_at_100
value: 14.798
- type: map_at_1000
value: 15.927
- type: map_at_3
value: 7.317
- type: map_at_5
value: 8.726
- type: ndcg_at_1
value: 36.5
- type: ndcg_at_10
value: 26.098
- type: ndcg_at_100
value: 29.215999999999998
- type: ndcg_at_1000
value: 36.254999999999995
- type: ndcg_at_3
value: 29.247
- type: ndcg_at_5
value: 27.692
- type: precision_at_1
value: 47.25
- type: precision_at_10
value: 22.625
- type: precision_at_100
value: 7.042
- type: precision_at_1000
value: 1.6129999999999998
- type: precision_at_3
value: 34.083000000000006
- type: precision_at_5
value: 29.5
- type: recall_at_1
value: 4.672
- type: recall_at_10
value: 15.638
- type: recall_at_100
value: 36.228
- type: recall_at_1000
value: 58.831
- type: recall_at_3
value: 8.578
- type: recall_at_5
value: 11.18
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 49.919999999999995
- type: f1
value: 45.37973678791632
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 25.801000000000002
- type: map_at_10
value: 33.941
- type: map_at_100
value: 34.73
- type: map_at_1000
value: 34.793
- type: map_at_3
value: 31.705
- type: map_at_5
value: 33.047
- type: ndcg_at_1
value: 27.933000000000003
- type: ndcg_at_10
value: 38.644
- type: ndcg_at_100
value: 42.594
- type: ndcg_at_1000
value: 44.352000000000004
- type: ndcg_at_3
value: 34.199
- type: ndcg_at_5
value: 36.573
- type: precision_at_1
value: 27.933000000000003
- type: precision_at_10
value: 5.603000000000001
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 14.171
- type: precision_at_5
value: 9.786999999999999
- type: recall_at_1
value: 25.801000000000002
- type: recall_at_10
value: 50.876
- type: recall_at_100
value: 69.253
- type: recall_at_1000
value: 82.907
- type: recall_at_3
value: 38.879000000000005
- type: recall_at_5
value: 44.651999999999994
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 9.142
- type: map_at_10
value: 13.841999999999999
- type: map_at_100
value: 14.960999999999999
- type: map_at_1000
value: 15.187000000000001
- type: map_at_3
value: 11.966000000000001
- type: map_at_5
value: 12.921
- type: ndcg_at_1
value: 18.364
- type: ndcg_at_10
value: 18.590999999999998
- type: ndcg_at_100
value: 24.153
- type: ndcg_at_1000
value: 29.104000000000003
- type: ndcg_at_3
value: 16.323
- type: ndcg_at_5
value: 17.000999999999998
- type: precision_at_1
value: 18.364
- type: precision_at_10
value: 5.216
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 10.751
- type: precision_at_5
value: 7.932
- type: recall_at_1
value: 9.142
- type: recall_at_10
value: 22.747
- type: recall_at_100
value: 44.585
- type: recall_at_1000
value: 75.481
- type: recall_at_3
value: 14.602
- type: recall_at_5
value: 17.957
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 18.677
- type: map_at_10
value: 26.616
- type: map_at_100
value: 27.605
- type: map_at_1000
value: 27.711999999999996
- type: map_at_3
value: 24.396
- type: map_at_5
value: 25.627
- type: ndcg_at_1
value: 37.352999999999994
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 38.423
- type: ndcg_at_1000
value: 40.947
- type: ndcg_at_3
value: 29.885
- type: ndcg_at_5
value: 31.874999999999996
- type: precision_at_1
value: 37.352999999999994
- type: precision_at_10
value: 7.539999999999999
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.938
- type: precision_at_5
value: 12.943
- type: recall_at_1
value: 18.677
- type: recall_at_10
value: 37.698
- type: recall_at_100
value: 55.354000000000006
- type: recall_at_1000
value: 72.255
- type: recall_at_3
value: 28.406
- type: recall_at_5
value: 32.357
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 74.3292
- type: ap
value: 68.30186110189658
- type: f1
value: 74.20709636944783
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 6.889000000000001
- type: map_at_10
value: 12.321
- type: map_at_100
value: 13.416
- type: map_at_1000
value: 13.525
- type: map_at_3
value: 10.205
- type: map_at_5
value: 11.342
- type: ndcg_at_1
value: 7.092
- type: ndcg_at_10
value: 15.827
- type: ndcg_at_100
value: 21.72
- type: ndcg_at_1000
value: 24.836
- type: ndcg_at_3
value: 11.393
- type: ndcg_at_5
value: 13.462
- type: precision_at_1
value: 7.092
- type: precision_at_10
value: 2.7969999999999997
- type: precision_at_100
value: 0.583
- type: precision_at_1000
value: 0.08499999999999999
- type: precision_at_3
value: 5.019
- type: precision_at_5
value: 4.06
- type: recall_at_1
value: 6.889000000000001
- type: recall_at_10
value: 26.791999999999998
- type: recall_at_100
value: 55.371
- type: recall_at_1000
value: 80.12899999999999
- type: recall_at_3
value: 14.573
- type: recall_at_5
value: 19.557
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 89.6374829001368
- type: f1
value: 89.20878379358307
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 84.54212454212454
- type: f1
value: 82.81080100037023
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 86.46430953969313
- type: f1
value: 86.00019824223267
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 81.31850923896022
- type: f1
value: 81.07860454762863
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 58.23234134098243
- type: f1
value: 56.63845098081841
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 72.28571428571429
- type: f1
value: 70.95796714592039
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 70.68171454628363
- type: f1
value: 52.57188062729139
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 60.521273598196665
- type: f1
value: 42.70492970339204
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 64.32288192128087
- type: f1
value: 45.97360620220273
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 58.67209520826808
- type: f1
value: 42.82844991304579
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 41.95769092864826
- type: f1
value: 28.914127631431263
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 55.28390596745027
- type: f1
value: 38.33899250561289
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 70.00336247478144
- type: f1
value: 68.72041942191649
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.0268997982515
- type: f1
value: 75.29844481506652
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 30.327566856300813
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 28.01650210863619
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.11041256752524
- type: mrr
value: 32.14172939750204
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 3.527
- type: map_at_10
value: 9.283
- type: map_at_100
value: 11.995000000000001
- type: map_at_1000
value: 13.33
- type: map_at_3
value: 6.223
- type: map_at_5
value: 7.68
- type: ndcg_at_1
value: 36.223
- type: ndcg_at_10
value: 28.255999999999997
- type: ndcg_at_100
value: 26.355
- type: ndcg_at_1000
value: 35.536
- type: ndcg_at_3
value: 31.962000000000003
- type: ndcg_at_5
value: 30.61
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 21.889
- type: precision_at_100
value: 7.1080000000000005
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 30.857
- type: precision_at_5
value: 27.307
- type: recall_at_1
value: 3.527
- type: recall_at_10
value: 14.015
- type: recall_at_100
value: 28.402
- type: recall_at_1000
value: 59.795
- type: recall_at_3
value: 7.5969999999999995
- type: recall_at_5
value: 10.641
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 11.631
- type: map_at_10
value: 19.532
- type: map_at_100
value: 20.821
- type: map_at_1000
value: 20.910999999999998
- type: map_at_3
value: 16.597
- type: map_at_5
value: 18.197
- type: ndcg_at_1
value: 13.413
- type: ndcg_at_10
value: 24.628
- type: ndcg_at_100
value: 30.883
- type: ndcg_at_1000
value: 33.216
- type: ndcg_at_3
value: 18.697
- type: ndcg_at_5
value: 21.501
- type: precision_at_1
value: 13.413
- type: precision_at_10
value: 4.571
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 8.845
- type: precision_at_5
value: 6.889000000000001
- type: recall_at_1
value: 11.631
- type: recall_at_10
value: 38.429
- type: recall_at_100
value: 67.009
- type: recall_at_1000
value: 84.796
- type: recall_at_3
value: 22.74
- type: recall_at_5
value: 29.266
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 66.64
- type: map_at_10
value: 80.394
- type: map_at_100
value: 81.099
- type: map_at_1000
value: 81.122
- type: map_at_3
value: 77.289
- type: map_at_5
value: 79.25999999999999
- type: ndcg_at_1
value: 76.85
- type: ndcg_at_10
value: 84.68
- type: ndcg_at_100
value: 86.311
- type: ndcg_at_1000
value: 86.49900000000001
- type: ndcg_at_3
value: 81.295
- type: ndcg_at_5
value: 83.199
- type: precision_at_1
value: 76.85
- type: precision_at_10
value: 12.928999999999998
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.557
- type: precision_at_5
value: 23.576
- type: recall_at_1
value: 66.64
- type: recall_at_10
value: 93.059
- type: recall_at_100
value: 98.922
- type: recall_at_1000
value: 99.883
- type: recall_at_3
value: 83.49499999999999
- type: recall_at_5
value: 88.729
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 42.17131361041068
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 48.01815621479994
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.198
- type: map_at_10
value: 7.550999999999999
- type: map_at_100
value: 9.232
- type: map_at_1000
value: 9.51
- type: map_at_3
value: 5.2940000000000005
- type: map_at_5
value: 6.343999999999999
- type: ndcg_at_1
value: 15.8
- type: ndcg_at_10
value: 13.553999999999998
- type: ndcg_at_100
value: 20.776
- type: ndcg_at_1000
value: 26.204
- type: ndcg_at_3
value: 12.306000000000001
- type: ndcg_at_5
value: 10.952
- type: precision_at_1
value: 15.8
- type: precision_at_10
value: 7.180000000000001
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.307
- type: precision_at_3
value: 11.333
- type: precision_at_5
value: 9.62
- type: recall_at_1
value: 3.198
- type: recall_at_10
value: 14.575
- type: recall_at_100
value: 35.758
- type: recall_at_1000
value: 62.317
- type: recall_at_3
value: 6.922000000000001
- type: recall_at_5
value: 9.767000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 84.5217161312271
- type: cos_sim_spearman
value: 79.58562467776268
- type: euclidean_pearson
value: 76.69364353942403
- type: euclidean_spearman
value: 74.68959282070473
- type: manhattan_pearson
value: 76.81159265133732
- type: manhattan_spearman
value: 74.7519444048176
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 83.70403706922605
- type: cos_sim_spearman
value: 74.28502198729447
- type: euclidean_pearson
value: 83.32719404608066
- type: euclidean_spearman
value: 75.92189433460788
- type: manhattan_pearson
value: 83.35841543005293
- type: manhattan_spearman
value: 75.94458615451978
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 84.94127878986795
- type: cos_sim_spearman
value: 85.35148434923192
- type: euclidean_pearson
value: 81.71127467071571
- type: euclidean_spearman
value: 82.88240481546771
- type: manhattan_pearson
value: 81.72826221967252
- type: manhattan_spearman
value: 82.90725064625128
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 83.1474704168523
- type: cos_sim_spearman
value: 79.20612995350827
- type: euclidean_pearson
value: 78.85993329596555
- type: euclidean_spearman
value: 78.91956572744715
- type: manhattan_pearson
value: 78.89999720522347
- type: manhattan_spearman
value: 78.93956842550107
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 84.81255514055894
- type: cos_sim_spearman
value: 85.5217140762934
- type: euclidean_pearson
value: 82.15024353784499
- type: euclidean_spearman
value: 83.04155334389833
- type: manhattan_pearson
value: 82.18598945053624
- type: manhattan_spearman
value: 83.07248357693301
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 80.63248465157822
- type: cos_sim_spearman
value: 82.53853238521991
- type: euclidean_pearson
value: 78.33936863828221
- type: euclidean_spearman
value: 79.16305579487414
- type: manhattan_pearson
value: 78.3888359870894
- type: manhattan_spearman
value: 79.18504473136467
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 90.09066290639687
- type: cos_sim_spearman
value: 90.43893699357069
- type: euclidean_pearson
value: 82.39520777222396
- type: euclidean_spearman
value: 81.23948185395952
- type: manhattan_pearson
value: 82.35529784653383
- type: manhattan_spearman
value: 81.12681522483975
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 63.52752323046846
- type: cos_sim_spearman
value: 63.19719780439462
- type: euclidean_pearson
value: 58.29085490641428
- type: euclidean_spearman
value: 58.975178656335046
- type: manhattan_pearson
value: 58.183542772416985
- type: manhattan_spearman
value: 59.190630462178994
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 85.45100366635687
- type: cos_sim_spearman
value: 85.66816193002651
- type: euclidean_pearson
value: 81.87976731329091
- type: euclidean_spearman
value: 82.01382867690964
- type: manhattan_pearson
value: 81.88260155706726
- type: manhattan_spearman
value: 82.05258597906492
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 77.53549990038017
- type: mrr
value: 93.37474163454556
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 31.167
- type: map_at_10
value: 40.778
- type: map_at_100
value: 42.063
- type: map_at_1000
value: 42.103
- type: map_at_3
value: 37.12
- type: map_at_5
value: 39.205
- type: ndcg_at_1
value: 33.667
- type: ndcg_at_10
value: 46.662
- type: ndcg_at_100
value: 51.995999999999995
- type: ndcg_at_1000
value: 53.254999999999995
- type: ndcg_at_3
value: 39.397999999999996
- type: ndcg_at_5
value: 42.934
- type: precision_at_1
value: 33.667
- type: precision_at_10
value: 7.1
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 16.111
- type: precision_at_5
value: 11.600000000000001
- type: recall_at_1
value: 31.167
- type: recall_at_10
value: 63.744
- type: recall_at_100
value: 87.156
- type: recall_at_1000
value: 97.556
- type: recall_at_3
value: 44.0
- type: recall_at_5
value: 52.556000000000004
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.55148514851486
- type: cos_sim_ap
value: 80.535236573428
- type: cos_sim_f1
value: 75.01331912626532
- type: cos_sim_precision
value: 80.27366020524515
- type: cos_sim_recall
value: 70.39999999999999
- type: dot_accuracy
value: 99.04851485148515
- type: dot_ap
value: 28.505358821499726
- type: dot_f1
value: 36.36363636363637
- type: dot_precision
value: 37.160751565762006
- type: dot_recall
value: 35.6
- type: euclidean_accuracy
value: 99.4990099009901
- type: euclidean_ap
value: 74.95819047075476
- type: euclidean_f1
value: 71.15489874110564
- type: euclidean_precision
value: 78.59733978234583
- type: euclidean_recall
value: 65.0
- type: manhattan_accuracy
value: 99.50198019801981
- type: manhattan_ap
value: 75.02070096015086
- type: manhattan_f1
value: 71.20535714285712
- type: manhattan_precision
value: 80.55555555555556
- type: manhattan_recall
value: 63.800000000000004
- type: max_accuracy
value: 99.55148514851486
- type: max_ap
value: 80.535236573428
- type: max_f1
value: 75.01331912626532
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 54.13314692311623
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 31.115181648287145
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 44.771112666694336
- type: mrr
value: 45.30415764790765
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 30.849429597669374
- type: cos_sim_spearman
value: 30.384175038360194
- type: dot_pearson
value: 29.030383429536823
- type: dot_spearman
value: 28.03273624951732
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.19499999999999998
- type: map_at_10
value: 1.0959999999999999
- type: map_at_100
value: 5.726
- type: map_at_1000
value: 13.611999999999998
- type: map_at_3
value: 0.45399999999999996
- type: map_at_5
value: 0.67
- type: ndcg_at_1
value: 71.0
- type: ndcg_at_10
value: 55.352999999999994
- type: ndcg_at_100
value: 40.797
- type: ndcg_at_1000
value: 35.955999999999996
- type: ndcg_at_3
value: 63.263000000000005
- type: ndcg_at_5
value: 60.14000000000001
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 56.99999999999999
- type: precision_at_100
value: 41.199999999999996
- type: precision_at_1000
value: 16.154
- type: precision_at_3
value: 66.667
- type: precision_at_5
value: 62.8
- type: recall_at_1
value: 0.19499999999999998
- type: recall_at_10
value: 1.3639999999999999
- type: recall_at_100
value: 9.317
- type: recall_at_1000
value: 33.629999999999995
- type: recall_at_3
value: 0.49300000000000005
- type: recall_at_5
value: 0.756
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 1.335
- type: map_at_10
value: 6.293
- type: map_at_100
value: 10.928
- type: map_at_1000
value: 12.359
- type: map_at_3
value: 3.472
- type: map_at_5
value: 4.935
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 16.178
- type: ndcg_at_100
value: 28.149
- type: ndcg_at_1000
value: 39.845000000000006
- type: ndcg_at_3
value: 19.171
- type: ndcg_at_5
value: 17.864
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 14.49
- type: precision_at_100
value: 6.306000000000001
- type: precision_at_1000
value: 1.3860000000000001
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 18.367
- type: recall_at_1
value: 1.335
- type: recall_at_10
value: 10.825999999999999
- type: recall_at_100
value: 39.251000000000005
- type: recall_at_1000
value: 74.952
- type: recall_at_3
value: 4.9110000000000005
- type: recall_at_5
value: 7.312
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 69.93339999999999
- type: ap
value: 13.87476602492533
- type: f1
value: 53.867357615848555
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 62.43916242218449
- type: f1
value: 62.870386304954685
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 37.202082549859796
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.65023544137807
- type: cos_sim_ap
value: 65.99787692764193
- type: cos_sim_f1
value: 62.10650887573965
- type: cos_sim_precision
value: 56.30901287553648
- type: cos_sim_recall
value: 69.23482849604221
- type: dot_accuracy
value: 79.10830303391549
- type: dot_ap
value: 48.80109642320246
- type: dot_f1
value: 51.418744625967314
- type: dot_precision
value: 40.30253107683091
- type: dot_recall
value: 71.00263852242745
- type: euclidean_accuracy
value: 82.45812719794957
- type: euclidean_ap
value: 60.09969493259607
- type: euclidean_f1
value: 57.658573789246226
- type: euclidean_precision
value: 55.62913907284768
- type: euclidean_recall
value: 59.84168865435356
- type: manhattan_accuracy
value: 82.46408773916671
- type: manhattan_ap
value: 60.116199786815116
- type: manhattan_f1
value: 57.683903860160235
- type: manhattan_precision
value: 53.41726618705036
- type: manhattan_recall
value: 62.69129287598945
- type: max_accuracy
value: 83.65023544137807
- type: max_ap
value: 65.99787692764193
- type: max_f1
value: 62.10650887573965
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34943920518494
- type: cos_sim_ap
value: 84.5428891020442
- type: cos_sim_f1
value: 77.09709933923172
- type: cos_sim_precision
value: 74.83150952967607
- type: cos_sim_recall
value: 79.50415768401602
- type: dot_accuracy
value: 84.53448208949432
- type: dot_ap
value: 73.96328242371995
- type: dot_f1
value: 70.00553786515299
- type: dot_precision
value: 63.58777665995976
- type: dot_recall
value: 77.86418232214352
- type: euclidean_accuracy
value: 86.87662514068381
- type: euclidean_ap
value: 81.45499631520235
- type: euclidean_f1
value: 73.46567109816063
- type: euclidean_precision
value: 69.71037533697381
- type: euclidean_recall
value: 77.6485987064983
- type: manhattan_accuracy
value: 86.88244654014825
- type: manhattan_ap
value: 81.47180273946366
- type: manhattan_f1
value: 73.44624393136418
- type: manhattan_precision
value: 70.80385852090032
- type: manhattan_recall
value: 76.29350169387126
- type: max_accuracy
value: 88.34943920518494
- type: max_ap
value: 84.5428891020442
- type: max_f1
value: 77.09709933923172
---
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
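As a minimal sketch (the repository id is assumed from this card's title, and the query/document bracket tokens used by the SPECB variant are handled in the linked codebase rather than reproduced here; the 5.8B model also needs substantial GPU memory), encoding text might look like:
```python
from sentence_transformers import SentenceTransformer, util

# Repository id assumed from the card title; adjust if the Hub path differs.
model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")

queries = ["How do sentence embeddings work?"]
docs = ["Sentence embeddings map text to dense vectors used for semantic search."]

query_emb = model.encode(queries)
doc_emb = model.encode(docs)

# Rank documents by cosine similarity to the query.
print(util.cos_sim(query_emb, doc_emb))
```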
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
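As a hedged reconstruction, the settings above map roughly onto a Sentence Transformers training loop like the one below; the training pairs are placeholders and the base checkpoint name is illustrative, since only the hyperparameters are given in this card:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("base-model-name")  # placeholder base checkpoint

# Placeholder (query, positive passage) pairs; the real run used MS MARCO data.
train_examples = [InputExample(texts=["example query", "relevant passage"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives loss with the scale and cosine similarity listed above.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    optimizer_params={"lr": 5e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="warmuplinear",
)
```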
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
lewtun/my-awesome-setfit-model-3 | lewtun | 2022-10-03T09:06:25Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-10-03T09:06:17Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
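For illustration only (the labeled pairs and the MPNet base checkpoint below are assumptions; only the loss class and hyperparameters come from this card), these settings correspond to a fit() call roughly like:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")  # assumed base

# Placeholder labeled pairs; CosineSimilarityLoss expects a float similarity label.
train_examples = [InputExample(texts=["sentence A", "sentence B"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=4,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```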
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
tkubotake/distilbert-base-uncased-finetuned-emotion | tkubotake | 2022-10-03T08:25:34Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-28T05:01:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.918
- name: F1
type: f1
value: 0.9179456491632857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2263
- Accuracy: 0.918
- F1: 0.9179
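As a quick, hedged example (the repository id is assumed from this card's title; whether the output shows emotion names or generic LABEL_i ids depends on the saved config), inference could look like:
```python
from transformers import pipeline

# Repository id assumed from the card title.
classifier = pipeline(
    "text-classification",
    model="tkubotake/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top label (e.g. an emotion class such as "joy") with its score.
print(classifier("I can't wait to see you again!"))
```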
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8566 | 1.0 | 250 | 0.3283 | 0.903 | 0.9002 |
| 0.2607 | 2.0 | 500 | 0.2263 | 0.918 | 0.9179 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
PartiallyTyped/answerable_tydiqa_lm_pretrained_finnish | PartiallyTyped | 2022-10-03T07:09:57Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text generation",
"fi",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-02T14:52:13Z | ---
language:
- fi
tags:
- text generation
license: mit
datasets:
- answerable tydiqa
---
# ReadMe
This is a pretrained model based on [Finnish-NLP/gpt2-finnish](https://huggingface.co/Finnish-NLP/gpt2-finnish), further trained for 2 epochs on the text field of the Finnish samples of [copenlu/answerable_tydiqa](https://huggingface.co/datasets/copenlu/answerable_tydiqa).
To load the pretrained language-modeling head, use `AutoModelForCausalLM.from_pretrained`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PartiallyTyped/answerable_tydiqa_lm_pretrained_finnish"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
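# A hedged follow-up, not part of the original snippet: generate a short Finnish
# continuation with the pretrained language-modeling head.
inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))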
``` |
krm/my_exercice_mrpc | krm | 2022-10-03T06:47:03Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-03T06:15:31Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: krm/my_exercice_mrpc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# krm/my_exercice_mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6942
- Train Accuracy: 0.6200
- Validation Loss: 0.6486
- Validation Accuracy: 0.6838
- Epoch: 2
## Model description
This model is not meant to be used; it is only a small test.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6860 | 0.6314 | 0.7727 | 0.6838 | 0 |
| 0.6862 | 0.6347 | 0.6326 | 0.6838 | 1 |
| 0.6942 | 0.6200 | 0.6486 | 0.6838 | 2 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/liminalspaces | sd-concepts-library | 2022-10-03T04:23:32Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-03T04:23:28Z | ---
license: mit
---
### Liminalspaces on Stable Diffusion
This is the `<liminal image>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
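A hedged sketch of loading this embedding outside the notebooks, via the diffusers textual-inversion loader (the base checkpoint is chosen for illustration and the API requires a reasonably recent diffusers version):
```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint chosen for illustration; any SD 1.x model should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <liminal image> token from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/liminalspaces")

image = pipe("an empty hallway in the style of <liminal image>").images[0]
image.save("liminal.png")
```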
Here is the new concept you will be able to use as a `style`:






|
g30rv17ys/ddpm-geeve-cnv-1500-300ep | g30rv17ys | 2022-10-03T01:51:32Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-02T16:17:04Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-cnv-1500-300ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
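A hedged sketch of what such a snippet might look like, using the repository id from the TensorBoard link below and the standard unconditional DDPM pipeline:
```python
from diffusers import DDPMPipeline

# Repository id taken from the TensorBoard link in this card.
pipeline = DDPMPipeline.from_pretrained("geevegeorge/ddpm-geeve-cnv-1500-300ep")

# Sample one image from the unconditional diffusion model.
image = pipeline(num_inference_steps=1000).images[0]
image.save("generated_oct.png")
```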
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-cnv-1500-300ep/tensorboard?#scalars)
|
Arman511/CatVsDog | Arman511 | 2022-10-03T00:02:30Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2022-10-03T00:01:08Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
din0s/t5-small-fr-finetuned-en-to-it | din0s | 2022-10-02T22:46:16Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:ccmatrix",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-02T22:08:17Z | ---
tags:
- generated_from_trainer
datasets:
- ccmatrix
metrics:
- bleu
model-index:
- name: t5-small_fr-finetuned-en-to-it
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ccmatrix
type: ccmatrix
config: en-it
split: train[3000:12000]
args: en-it
metrics:
- name: Bleu
type: bleu
value: 7.4222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_fr-finetuned-en-to-it
This model is a fine-tuned version of [din0s/t5-small-finetuned-en-to-fr](https://huggingface.co/din0s/t5-small-finetuned-en-to-fr) on the ccmatrix dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3225
- Bleu: 7.4222
- Gen Len: 59.1127
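As a hedged illustration (the Hub id is assumed to match this repository, and the task prefix below is a guess based on how T5 translation prompts are usually written, not something confirmed by the card), inference could look like:
```python
from transformers import pipeline

# Repository id assumed; the prefix is illustrative only.
translator = pipeline("text2text-generation", model="din0s/t5-small-fr-finetuned-en-to-it")

print(translator("translate English to Italian: The weather is nice today.", max_length=64))
```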
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 94 | 3.0406 | 3.2546 | 52.6127 |
| No log | 2.0 | 188 | 2.9278 | 3.1206 | 62.774 |
| No log | 3.0 | 282 | 2.8573 | 3.4206 | 63.6707 |
| No log | 4.0 | 376 | 2.8030 | 3.4847 | 66.408 |
| No log | 5.0 | 470 | 2.7602 | 3.8933 | 64.362 |
| 3.2982 | 6.0 | 564 | 2.7185 | 3.9298 | 66.058 |
| 3.2982 | 7.0 | 658 | 2.6842 | 4.0344 | 65.5773 |
| 3.2982 | 8.0 | 752 | 2.6536 | 4.3243 | 65.0047 |
| 3.2982 | 9.0 | 846 | 2.6233 | 4.5078 | 64.5813 |
| 3.2982 | 10.0 | 940 | 2.5966 | 4.6657 | 63.654 |
| 2.9837 | 11.0 | 1034 | 2.5743 | 4.7664 | 63.326 |
| 2.9837 | 12.0 | 1128 | 2.5526 | 4.9535 | 62.7327 |
| 2.9837 | 13.0 | 1222 | 2.5303 | 5.1386 | 63.5887 |
| 2.9837 | 14.0 | 1316 | 2.5122 | 5.1037 | 64.1667 |
| 2.9837 | 15.0 | 1410 | 2.4937 | 5.3304 | 63.116 |
| 2.8416 | 16.0 | 1504 | 2.4797 | 5.5006 | 61.4953 |
| 2.8416 | 17.0 | 1598 | 2.4627 | 5.5892 | 62.01 |
| 2.8416 | 18.0 | 1692 | 2.4497 | 5.8497 | 61.42 |
| 2.8416 | 19.0 | 1786 | 2.4372 | 6.0074 | 61.1587 |
| 2.8416 | 20.0 | 1880 | 2.4256 | 6.1464 | 60.522 |
| 2.8416 | 21.0 | 1974 | 2.4148 | 6.3117 | 59.5567 |
| 2.7428 | 22.0 | 2068 | 2.4039 | 6.4626 | 59.532 |
| 2.7428 | 23.0 | 2162 | 2.3939 | 6.5287 | 60.2307 |
| 2.7428 | 24.0 | 2256 | 2.3857 | 6.6093 | 60.22 |
| 2.7428 | 25.0 | 2350 | 2.3772 | 6.8004 | 59.396 |
| 2.7428 | 26.0 | 2444 | 2.3703 | 6.9433 | 59.5027 |
| 2.6779 | 27.0 | 2538 | 2.3631 | 7.0153 | 59.1433 |
| 2.6779 | 28.0 | 2632 | 2.3575 | 7.1783 | 58.9793 |
| 2.6779 | 29.0 | 2726 | 2.3514 | 7.1639 | 59.362 |
| 2.6779 | 30.0 | 2820 | 2.3457 | 7.2176 | 58.9927 |
| 2.6779 | 31.0 | 2914 | 2.3411 | 7.2599 | 59.1433 |
| 2.6335 | 32.0 | 3008 | 2.3374 | 7.284 | 59.1787 |
| 2.6335 | 33.0 | 3102 | 2.3339 | 7.3678 | 59.07 |
| 2.6335 | 34.0 | 3196 | 2.3307 | 7.3364 | 58.9813 |
| 2.6335 | 35.0 | 3290 | 2.3281 | 7.3318 | 58.96 |
| 2.6335 | 36.0 | 3384 | 2.3259 | 7.394 | 59.0787 |
| 2.6335 | 37.0 | 3478 | 2.3245 | 7.4133 | 59.0393 |
| 2.609 | 38.0 | 3572 | 2.3232 | 7.383 | 59.1887 |
| 2.609 | 39.0 | 3666 | 2.3227 | 7.4105 | 59.1227 |
| 2.609 | 40.0 | 3760 | 2.3225 | 7.4222 | 59.1127 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
din0s/t5-small-de-finetuned-en-to-it | din0s | 2022-10-02T22:43:18Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:ccmatrix",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-02T22:01:53Z | ---
tags:
- generated_from_trainer
datasets:
- ccmatrix
metrics:
- bleu
model-index:
- name: t5-small_de-finetuned-en-to-it
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ccmatrix
type: ccmatrix
config: en-it
split: train[3000:12000]
args: en-it
metrics:
- name: Bleu
type: bleu
value: 6.7338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_de-finetuned-en-to-it
This model is a fine-tuned version of [din0s/t5-small-finetuned-en-to-de](https://huggingface.co/din0s/t5-small-finetuned-en-to-de) on the ccmatrix dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3480
- Bleu: 6.7338
- Gen Len: 61.3273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 94 | 3.1064 | 2.9057 | 47.5067 |
| No log | 2.0 | 188 | 2.9769 | 2.7484 | 76.9273 |
| No log | 3.0 | 282 | 2.9015 | 3.0624 | 79.8873 |
| No log | 4.0 | 376 | 2.8444 | 3.2959 | 78.276 |
| No log | 5.0 | 470 | 2.7989 | 3.6694 | 74.6013 |
| 3.3505 | 6.0 | 564 | 2.7564 | 3.8098 | 74.3247 |
| 3.3505 | 7.0 | 658 | 2.7212 | 3.9596 | 72.554 |
| 3.3505 | 8.0 | 752 | 2.6886 | 4.2231 | 70.7673 |
| 3.3505 | 9.0 | 846 | 2.6572 | 4.1466 | 72.0113 |
| 3.3505 | 10.0 | 940 | 2.6294 | 4.2696 | 71.1647 |
| 3.0254 | 11.0 | 1034 | 2.6064 | 4.6375 | 67.7707 |
| 3.0254 | 12.0 | 1128 | 2.5838 | 4.7208 | 68.6707 |
| 3.0254 | 13.0 | 1222 | 2.5614 | 4.9191 | 68.5767 |
| 3.0254 | 14.0 | 1316 | 2.5427 | 4.9837 | 66.3867 |
| 3.0254 | 15.0 | 1410 | 2.5241 | 5.1011 | 66.7667 |
| 2.8789 | 16.0 | 1504 | 2.5093 | 5.283 | 64.944 |
| 2.8789 | 17.0 | 1598 | 2.4919 | 5.3205 | 65.738 |
| 2.8789 | 18.0 | 1692 | 2.4788 | 5.3046 | 65.3207 |
| 2.8789 | 19.0 | 1786 | 2.4651 | 5.5282 | 64.9407 |
| 2.8789 | 20.0 | 1880 | 2.4532 | 5.6745 | 63.0873 |
| 2.8789 | 21.0 | 1974 | 2.4419 | 5.7073 | 63.4973 |
| 2.7782 | 22.0 | 2068 | 2.4308 | 5.8513 | 62.8813 |
| 2.7782 | 23.0 | 2162 | 2.4209 | 5.8267 | 64.1033 |
| 2.7782 | 24.0 | 2256 | 2.4124 | 5.8534 | 64.2993 |
| 2.7782 | 25.0 | 2350 | 2.4037 | 6.0406 | 63.8313 |
| 2.7782 | 26.0 | 2444 | 2.3964 | 6.1517 | 63.4213 |
| 2.7116 | 27.0 | 2538 | 2.3897 | 6.2175 | 63.0573 |
| 2.7116 | 28.0 | 2632 | 2.3836 | 6.2551 | 62.876 |
| 2.7116 | 29.0 | 2726 | 2.3777 | 6.4412 | 62.4167 |
| 2.7116 | 30.0 | 2820 | 2.3717 | 6.4604 | 62.1087 |
| 2.7116 | 31.0 | 2914 | 2.3673 | 6.5471 | 62.1373 |
| 2.6662 | 32.0 | 3008 | 2.3634 | 6.5296 | 62.2533 |
| 2.6662 | 33.0 | 3102 | 2.3596 | 6.6623 | 61.276 |
| 2.6662 | 34.0 | 3196 | 2.3564 | 6.6591 | 61.392 |
| 2.6662 | 35.0 | 3290 | 2.3539 | 6.7201 | 61.0827 |
| 2.6662 | 36.0 | 3384 | 2.3516 | 6.675 | 61.3173 |
| 2.6662 | 37.0 | 3478 | 2.3500 | 6.6894 | 61.3507 |
| 2.6411 | 38.0 | 3572 | 2.3488 | 6.6539 | 61.5253 |
| 2.6411 | 39.0 | 3666 | 2.3482 | 6.7135 | 61.3733 |
| 2.6411 | 40.0 | 3760 | 2.3480 | 6.7338 | 61.3273 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
ChrisC1657/Aqua_tests | ChrisC1657 | 2022-10-02T20:33:02Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-02T17:06:56Z | ---
license: mit
---
Aqua_anime_girl_blk_reg_2000.ckpt
# Dataset
>Training: 16 images
>Regularization: 400 images - black reg images
# Info
>Model Used: Waifu Diffusion 1.3 epoch 5
>Steps: 2000
>Keyword: Aqua (Use this in the prompt)
>Class Phrase: Anime girl
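As a rough usage sketch for these checkpoints (assuming the `.ckpt` has first been converted to a diffusers-format pipeline; the local path, sampler settings, and prompt wording below are hypothetical):
```python
from diffusers import StableDiffusionPipeline
import torch
# Assumes the .ckpt has already been converted to a diffusers-format folder;
# the local path below is hypothetical.
pipe = StableDiffusionPipeline.from_pretrained(
    "./Aqua_anime_girl_blk_reg_2000_diffusers", torch_dtype=torch.float16
).to("cuda")
# The trained keyword and class phrase go directly into the prompt.
prompt = "Aqua, anime girl, portrait, looking at viewer, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("aqua.png")
```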
Aqua_animegirl_wrd_reg_2000.ckpt
# Dataset
>Training: 16 images
>Regularization: 400 images - Waifu Research Department reg images
# Info
>Model Used: Waifu Diffusion 1.3 epoch 5
>Steps: 2000
>Keyword: Aqua (Use this in the prompt)
>Class Phrase: Anime girl
Aqua_CxRwMwCYViVz2JUH_blk_reg_2000.ckpt
# Dataset
>Training: 16 images
>Regularization: 400 images - black reg images
# Info
>Model Used: Waifu Diffusion 1.3 epoch 5
>Steps: 2000
>Keyword: Aqua (Use this in the prompt)
>Class Phrase: CxRwMwCYViVz2JUH
Aqua_CxRwMwCYViVz2JUH_wrd_reg_2000.ckpt
# Dataset
>Training: 16 images
>Regularization: 400 images - Waifu Research Department reg images
# Info
>Model Used: Waifu Diffusion 1.3 epoch 5
>Steps: 2000
>Keyword: Aqua (Use this in the prompt)
>Class Phrase: CxRwMwCYViVz2JUH |
waifu-research-department/Natsuki-Subaru | waifu-research-department | 2022-10-02T20:26:18Z | 0 | 4 | null | [
"region:us"
]
| null | 2022-10-02T19:22:45Z | # Description
Trainer: naotsue
Natsuki Subaru from Re:Zero
# Dataset
>Training: 20 images
>Regularization: 300 images
# Info
>Model Used: Waifu Diffusion 1.3 (Epoch 6)
>Steps: 3000
>Keyword: SUBARU (Use this in the prompt)
>Class Phrase: 1boy
 |
ericntay/stbl_clinical_bert_ft_rs9 | ericntay | 2022-10-02T19:02:19Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-02T18:44:01Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs9
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0866
- F1: 0.9109
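A minimal inference sketch (the example sentence and aggregation strategy are illustrative; the entity label set comes from the undocumented fine-tuning data):
```python
from transformers import pipeline
# Token-classification pipeline over the fine-tuned clinical BERT checkpoint.
ner = pipeline(
    "token-classification",
    model="ericntay/stbl_clinical_bert_ft_rs9",
    aggregation_strategy="simple",  # merge word pieces into entity-level spans
)
print(ner("The patient was started on 40 mg of furosemide for acute CHF exacerbation."))
```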
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2712 | 1.0 | 101 | 0.0879 | 0.8420 |
| 0.0666 | 2.0 | 202 | 0.0776 | 0.8726 |
| 0.031 | 3.0 | 303 | 0.0630 | 0.8923 |
| 0.015 | 4.0 | 404 | 0.0821 | 0.8958 |
| 0.0087 | 5.0 | 505 | 0.0736 | 0.9084 |
| 0.0061 | 6.0 | 606 | 0.0738 | 0.9083 |
| 0.0037 | 7.0 | 707 | 0.0838 | 0.9157 |
| 0.0027 | 8.0 | 808 | 0.0827 | 0.9088 |
| 0.0017 | 9.0 | 909 | 0.0850 | 0.9135 |
| 0.0014 | 10.0 | 1010 | 0.0871 | 0.9090 |
| 0.001 | 11.0 | 1111 | 0.0868 | 0.9094 |
| 0.001 | 12.0 | 1212 | 0.0866 | 0.9109 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|