modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-29 06:27:49) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 502 classes) | tags (sequence, 1–4.05k entries) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-29 06:23:06) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
teacookies/autonlp-roberta-base-squad2-24465516 | teacookies | 2021-10-22T08:21:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 65.5797497320557
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465516
- CO2 Emissions (in grams): 65.5797497320557
## Validation Metrics
- Loss: 0.6545609831809998
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465516
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is only needed for private repos)
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors="pt")

# Example gold answer span (token positions), used only to compute a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
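
# Not part of the original card: a hypothetical continuation that decodes the
# most likely answer span from the start/end logits computed above.
answer_start = torch.argmax(start_scores)
answer_end = torch.argmax(end_scores) + 1
answer = tokenizer.decode(inputs["input_ids"][0][answer_start:answer_end])
print(answer)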
``` |
teacookies/autonlp-roberta-base-squad2-24465524 | teacookies | 2021-10-22T08:14:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 58.51753681929935
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465524
- CO2 Emissions (in grams): 58.51753681929935
## Validation Metrics
- Loss: 0.5759999752044678
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465524
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is only needed for private repos)
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors="pt")

# Example gold answer span (token positions), used only to compute a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465519 | teacookies | 2021-10-22T08:13:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 58.19097299648645
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465519
- CO2 Emissions (in grams): 58.19097299648645
## Validation Metrics
- Loss: 0.566668689250946
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465519
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is only needed for private repos)
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465519", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465519", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors="pt")

# Example gold answer span (token positions), used only to compute a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465523 | teacookies | 2021-10-22T08:13:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 56.99866929988893
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465523
- CO2 Emissions (in grams): 56.99866929988893
## Validation Metrics
- Loss: 0.5468788146972656
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465523
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is only needed for private repos)
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465523", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors="pt")

# Example gold answer span (token positions), used only to compute a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465515 | teacookies | 2021-10-22T08:11:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 56.45146749922553
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465515
- CO2 Emissions (in grams): 56.45146749922553
## Validation Metrics
- Loss: 0.5932255387306213
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465515
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is only needed for private repos)
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors="pt")

# Example gold answer span (token positions), used only to compute a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
Gigworks/ASR_id | Gigworks | 2021-10-22T07:28:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | # Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned: facebook/wav2vec2-large-xlsr-53 |
aditeyabaral/sentencetransformer-distilbert-base-cased | aditeyabaral | 2021-10-21T22:30:29Z | 129 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-distilbert-base-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-base-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
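Since this checkpoint is tagged for sentence similarity, a natural follow-up (not part of the original card) is to compare the embeddings directly; this sketch assumes `util.cos_sim` is available in your installed sentence-transformers version:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-base-cased')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```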
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-base-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pritoms/distilgpt2-finetuned-wikitext2 | pritoms | 2021-10-21T21:16:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
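For reference, a minimal, hypothetical sketch of how these hyperparameters map onto the `transformers` `TrainingArguments` API (the `output_dir` and anything not listed above are placeholders, not taken from this card):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above; the Adam betas and
# epsilon match the library defaults, so they are not set explicitly here.
training_args = TrainingArguments(
    output_dir="distilgpt2-finetuned-wikitext2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```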
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 130 | 3.1733 |
| No log | 2.0 | 260 | 3.0756 |
| No log | 3.0 | 390 | 3.0540 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
JonatanGk/roberta-base-bne-finetuned-sqac | JonatanGk | 2021-10-21T21:06:47Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:sqac",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: roberta-base-bne-finetuned-sqac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9924 | 1.0 | 1196 | 0.8670 |
| 0.474 | 2.0 | 2392 | 0.8923 |
| 0.1637 | 3.0 | 3588 | 1.2066 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/darthvivien | huggingtweets | 2021-10-21T20:49:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/darthvivien/1634849358388/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425505571503886339/1ikaFh5K_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">vvn</div>
<div style="text-align: center; font-size: 14px;">@darthvivien</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from vvn.
| Data | vvn |
| --- | --- |
| Tweets downloaded | 3175 |
| Retweets | 460 |
| Short tweets | 114 |
| Tweets kept | 2601 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ple9op7w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @darthvivien's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pt4wq49) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pt4wq49/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/darthvivien')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/degg-dril-fred_delicious | huggingtweets | 2021-10-21T19:39:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/degg-dril-fred_delicious/1634845142916/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/58546628/goat22_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/726824334002638848/BEZFr1k8_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & deg & Fred Delicious</div>
<div style="text-align: center; font-size: 14px;">@degg-dril-fred_delicious</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & deg & Fred Delicious.
| Data | wint | deg | Fred Delicious |
| --- | --- | --- | --- |
| Tweets downloaded | 3227 | 3152 | 3235 |
| Retweets | 473 | 142 | 429 |
| Short tweets | 318 | 42 | 398 |
| Tweets kept | 2436 | 2968 | 2408 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1mwoed1f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @degg-dril-fred_delicious's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1a691ucn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1a691ucn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/degg-dril-fred_delicious')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lewtun/xlm-roberta-base-finetuned-marc-en | lewtun | 2021-10-21T18:53:52Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8850
- Mae: 0.4390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1589 | 1.0 | 235 | 0.9769 | 0.5122 |
| 0.974 | 2.0 | 470 | 0.8850 | 0.4390 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
aditeyabaral/sentencetransformer-roberta-base | aditeyabaral | 2021-10-21T18:03:26Z | 5 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
patrickvonplaten/unispeech-sat-large-timit-ft | patrickvonplaten | 2021-10-21T16:38:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: unispeech-sat-large-timit-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-sat-large-timit-ft
This model is a fine-tuned version of [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6074
- Wer: 0.3880
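As an illustration only (this snippet is not part of the original card, and it assumes the repository ships the usual Wav2Vec2-style processor files and expects 16 kHz input audio), the fine-tuned CTC checkpoint could be run roughly like this:
```python
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("patrickvonplaten/unispeech-sat-large-timit-ft")
model = AutoModelForCTC.from_pretrained("patrickvonplaten/unispeech-sat-large-timit-ft")

# Placeholder input: one second of silence at 16 kHz; replace with real audio samples.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```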
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2516 | 0.69 | 100 | 5.8638 | 1.0 |
| 2.9596 | 1.38 | 200 | 2.9550 | 1.0 |
| 2.8831 | 2.07 | 300 | 2.8547 | 1.0 |
| 2.3223 | 2.76 | 400 | 2.2044 | 1.0063 |
| 1.2104 | 3.45 | 500 | 1.0845 | 0.7706 |
| 0.6779 | 4.14 | 600 | 0.7342 | 0.5663 |
| 0.6319 | 4.83 | 700 | 0.6054 | 0.4881 |
| 0.664 | 5.52 | 800 | 0.5808 | 0.4913 |
| 0.402 | 6.21 | 900 | 0.5647 | 0.4611 |
| 0.3176 | 6.9 | 1000 | 0.5211 | 0.4440 |
| 0.3392 | 7.59 | 1100 | 0.5187 | 0.4359 |
| 0.3888 | 8.28 | 1200 | 0.5501 | 0.4391 |
| 0.2874 | 8.97 | 1300 | 0.5249 | 0.4148 |
| 0.208 | 9.66 | 1400 | 0.5407 | 0.4152 |
| 0.1457 | 10.34 | 1500 | 0.5722 | 0.4155 |
| 0.2375 | 11.03 | 1600 | 0.5780 | 0.4059 |
| 0.2111 | 11.72 | 1700 | 0.5823 | 0.4094 |
| 0.1422 | 12.41 | 1800 | 0.5754 | 0.3977 |
| 0.125 | 13.1 | 1900 | 0.5784 | 0.4031 |
| 0.1996 | 13.79 | 2000 | 0.5630 | 0.3956 |
| 0.1747 | 14.48 | 2100 | 0.5880 | 0.3964 |
| 0.1263 | 15.17 | 2200 | 0.5987 | 0.3951 |
| 0.11 | 15.86 | 2300 | 0.5688 | 0.3964 |
| 0.1411 | 16.55 | 2400 | 0.6223 | 0.3906 |
| 0.1647 | 17.24 | 2500 | 0.6135 | 0.3960 |
| 0.1162 | 17.93 | 2600 | 0.6224 | 0.3960 |
| 0.098 | 18.62 | 2700 | 0.6017 | 0.3907 |
| 0.1183 | 19.31 | 2800 | 0.6121 | 0.3885 |
| 0.1717 | 20.0 | 2900 | 0.6074 | 0.3880 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
abhishek/autonlp-hindi-question-answering-23865268 | abhishek | 2021-10-21T13:51:44Z | 14 | 5 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"autonlp",
"hi",
"dataset:abhishek/autonlp-data-hindi-question-answering",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
- question-answering
language: hi
widget:
- text: "ยดเคธเคคเฅเคถ เคงเคตเคจ เค
เคเคคเคฐเคฟเคเฅเคท เคเฅเคเคฆเฅเคฐยด เคเคฟเคธ เคฐเคพเคเฅเคฏ เคฎเฅเค เคธเฅเคฅเคฟเคค เคนเฅ?"
context: "เคธเคคเฅเคถ เคงเคตเคจ เค
เคเคคเคฐเคฟเคเฅเคท เคเฅเคเคฆเฅเคฐ, เคญเคพเคฐเคคเฅเคฏ เค
เคเคคเคฐเคฟเคเฅเคท เค
เคจเฅเคธเคเคงเคพเคจ เคธเคเคเค เคจ (เคเคธเคฐเฅ) เคเคพ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคเฅเคเคฆเฅเคฐ เคนเฅเฅค เคฏเคน เคเคเคงเฅเคฐ เคชเฅเคฐเคฆเฅเคถ เคเฅ เคถเฅเคฐเฅเคนเคฐเฅเคเฅเคเคพ เคฎเฅเค เคธเฅเคฅเคฟเคค เคนเฅ, เคเคธเฅ 'เคถเฅเคฐเฅเคนเคฐเฅเคเฅเคเคพ เคฐเฅเคเค' เคฏเคพ 'เคถเฅเคฐเฅเคนเคฐเฅเคเฅเคเคพ เคฒเคพเคเคเคฟเคเค เคฐเฅเคเค' เคเฅ เคจเคพเคฎ เคธเฅ เคญเฅ เคเคพเคจเคพ เคเคพเคคเคพ เคนเฅเฅค 2002 เคฎเฅเค เคเคธเคฐเฅ เคเฅ เคชเฅเคฐเฅเคต เคชเฅเคฐเคฌเคเคงเค เคเคฐ เคตเฅเคเฅเคเคพเคจเคฟเค เคธเคคเฅเคถ เคงเคตเคจ เคเฅ เคฎเคฐเคฃเฅเคชเคฐเคพเคเคค เคเคจเคเฅ เคธเคฎเฅเคฎเคพเคจ เคฎเฅเค เคเคธเคเคพ เคจเคพเคฎ เคฌเคฆเคฒเคพ เคเคฏเคพเฅค เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเฅ เค
เคธเฅเคฎเฅ\u200dเคฌเคฒเฅ เคเฅ เคฒเคฟเค เคฆเฅเคธเคฐเคพ เคญเคตเคจ เคเฅเคจเฅ\u200dเคฆเฅเคฐเฅเคฏ เคฎเคเคคเฅเคฐเคฟเคฎเคเคกเคฒ เคจเฅ 12 เคธเคฟเคคเคฎเฅ\u200dเคฌเคฐ, 2013 เคเฅ เคธเคคเฅเคถ เคงเคตเคจ เค
เคเคคเคฐเคฟเคเฅเคท เคเฅเคจเฅ\u200dเคฆเฅเคฐ, เคถเฅเคฐเฅเคนเคฐเคฟเคเฅเคเคพ เคฎเฅเค เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเฅ เค
เคธเฅเคฎเฅ\u200dเคฌเคฒเฅ เคเฅ เคฒเคฟเค เคฆเฅเคธเคฐเฅ เคญเคตเคจ เคเฅ เคจเคฟเคฐเฅเคฎเคพเคฃ เคเฅ เคฎเคเคเฅเคฐเฅ เคฆเฅเฅค เคเคธ เคชเคฐ 363.95 เคเคฐเฅเคกเคผ เคฐเฅเคชเคฏเฅ เคเฅ เค
เคจเฅเคฎเคพเคจเคฟเคค เคฒเคพเคเคค เคเคเคเฅ, เคเคฟเคธเคฎเฅเค เคธเคพเคค เคเคฐเฅเคกเคผ เคฐเฅเคชเคฏเฅ เคเคพ เคเคฐเฅเค เคตเคฟเคฆเฅเคถเฅ เคฎเฅเคฆเฅเคฐเคพ เคฎเฅเค เคนเฅเคเคพเฅค เคเคธ เคฆเฅเคธเคฐเฅ เคฌเคฟเคฒเฅเคกเคฟเคเค เคเฅ เคเคชเคฒเคฌเฅ\u200dเคง เคนเฅ เคเคพเคจเฅ เคธเฅ เคชเฅเคเคธเคเคฒเคตเฅ เคเคฐ เคเฅเคเคธเคเคฒเคตเฅ เคเฅ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคซเฅเคฐเฅเคเฅเคตเฅเคเคธเฅ เคฌเคขเคผเฅเคเฅเฅค เคฏเคน เคเฅเคเคธเคเคฒเคตเฅ เคเคฎเคเฅ-III เคเฅ เคเคเฅเคเคฐเคฃ เคเฅ เคฒเคฟเค เคตเคฐเฅเคคเคฎเคพเคจ เคตเฅ\u200dเคนเฅเคเคฒ เค
เคธเฅเคฎเฅ\u200dเคฌเคฒเฅ เคฌเคฟเคฒเฅเคกเคฟเคเค เคเฅ เค
เคคเคฟเคฐเคฟเคเฅ\u200dเคค เคธเฅเคตเคฟเคงเคพ เคฎเฅเคนเฅเคฏเคพ เคเคฐเคพเคฏเฅเคเฅเฅค เคคเฅเคธเคฐเฅ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคชเฅเคก เคคเคฅเคพ เคญเคตเคฟเคทเฅ\u200dเคฏ เคฎเฅเค เคธเคพเคฎเคพเคจเฅ\u200dเคฏ เคฏเคพเคจ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคเฅ เคฒเคฟเค เคญเฅ เคเคธเคธเฅ เคเคพเคซเฅ เคธเฅเคตเคฟเคงเคพ เคฎเคฟเคฒเฅเคเฅเฅค[1]\nเคฒเคพเคเค เคชเฅเคก\nเคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคฒเฅเคจเฅเค เคชเฅเคก\nเคเคธ เคฒเคพเคเค เคชเฅเคก เคธเฅ เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเคฐ เคธเคเคตเคฐเฅเคงเคฟเคค เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคเฅ เคฒเคพเคเค เคเคฟเคฏเคพ เคเคฏเคพ เคฅเคพเฅค เคฏเคน เคตเคฐเฅเคคเคฎเคพเคจ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคธเฅเคฅเคฒ เคเฅ เคฆเคเฅเคทเคฟเคฃเฅ เคธเคฟเคฐเฅ เคชเคฐ เคธเฅเคฅเคฟเคค เคนเฅเฅค เคเคธเฅ เคธเฅเคตเคพเคฎเฅเคเฅเคค เคเคฐ เคฆเคฟเคฏเคพ เคเคฏเคพ เคนเฅเฅค เคถเฅเคฐเฅ เคฎเฅเค เคเคธเฅ เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคฒเคพเคเค เคเคฐเคจเฅ เคเฅ เคฒเคฟเค เคฌเคจเคพเคฏเคพ เคเคฏเคพ เคฅเคพเฅค เคฒเฅเคเคฟเคจ เคฌเคพเคฆ เคฎเฅเค เคเคธเฅ เคธเคเคตเคฐเฅเคงเคฟเคค เคเคชเคเฅเคฐเคน เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคฏเคพเคจ เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคชเคฐเคฟเคธเคฐ เคเฅ เคฐเฅเคช เคฎเฅเค เคเคธเฅเคคเฅเคฎเคพเคฒ เคเคฟเคฏเคพ เคเคฏเคพ เคฅเคพเฅค\nเคชเฅเคฐเคฅเคฎ เคฒเคพเคเค เคชเฅเคก\nเคฆเฅเคตเคฟเคคเฅเคฏ เคฒเฅเคจเฅเค เคชเฅเคก\nเคคเฅเคคเฅเคฏ เคฒเคพเคเค เคชเฅเคก\nเคธเคจเฅเคฆเคฐเฅเคญ เคถเฅเคฐเฅเคฃเฅ:เคญเคพเคฐเคคเฅเคฏ เค
เคเคคเคฐเคฟเคเฅเคท เค
เคจเฅเคธเคเคงเคพเคจ เคธเคเคเค เคจ\nเคถเฅเคฐเฅเคฃเฅ:เคญเคพเคฐเคค เคเฅ เคฐเฅเคเฅเค เคชเฅเคฐเคเฅเคทเฅเคชเคฃ เคธเฅเคฅเคฒ"
datasets:
- abhishek/autonlp-data-hindi-question-answering
co2_eq_emissions: 39.76330395590446
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- CO2 Emissions (in grams): 39.76330395590446
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-hindi-question-answering-23865268
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is only needed for private repos)
model = AutoModelForQuestionAnswering.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors="pt")

# Example gold answer span (token positions), used only to compute a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
tucan9389/kcbert-base-finetuned | tucan9389 | 2021-10-21T11:53:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
model-index:
- name: kcbert-base-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metrics:
- name: Accuracy
type: accuracy
value: 0.8329856154606347
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert-base-finetuned
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7393
- Accuracy: 0.8330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4612 | 1.0 | 2855 | 0.5216 | 0.8143 |
| 0.3061 | 2.0 | 5710 | 0.5130 | 0.8248 |
| 0.2129 | 3.0 | 8565 | 0.6062 | 0.8257 |
| 0.1337 | 4.0 | 11420 | 0.7393 | 0.8330 |
| 0.0653 | 5.0 | 14275 | 0.8651 | 0.8302 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tiennvcs/distilbert-base-uncased-finetuned-infovqa | tiennvcs | 2021-10-21T11:37:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-infovqa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.02 | 100 | 4.7706 |
| No log | 0.05 | 200 | 4.4399 |
| No log | 0.07 | 300 | 3.8175 |
| No log | 0.09 | 400 | 3.8306 |
| 3.3071 | 0.12 | 500 | 3.6480 |
| 3.3071 | 0.14 | 600 | 3.6451 |
| 3.3071 | 0.16 | 700 | 3.4974 |
| 3.3071 | 0.19 | 800 | 3.4686 |
| 3.3071 | 0.21 | 900 | 3.4703 |
| 3.5336 | 0.23 | 1000 | 3.3165 |
| 3.5336 | 0.25 | 1100 | 3.3634 |
| 3.5336 | 0.28 | 1200 | 3.3466 |
| 3.5336 | 0.3 | 1300 | 3.3411 |
| 3.5336 | 0.32 | 1400 | 3.2456 |
| 3.3593 | 0.35 | 1500 | 3.3257 |
| 3.3593 | 0.37 | 1600 | 3.2941 |
| 3.3593 | 0.39 | 1700 | 3.2581 |
| 3.3593 | 0.42 | 1800 | 3.1680 |
| 3.3593 | 0.44 | 1900 | 3.2077 |
| 3.2436 | 0.46 | 2000 | 3.2422 |
| 3.2436 | 0.49 | 2100 | 3.2529 |
| 3.2436 | 0.51 | 2200 | 3.2681 |
| 3.2436 | 0.53 | 2300 | 3.1055 |
| 3.2436 | 0.56 | 2400 | 3.0174 |
| 3.093 | 0.58 | 2500 | 3.0608 |
| 3.093 | 0.6 | 2600 | 3.0200 |
| 3.093 | 0.63 | 2700 | 2.9884 |
| 3.093 | 0.65 | 2800 | 3.0041 |
| 3.093 | 0.67 | 2900 | 2.9700 |
| 3.0087 | 0.69 | 3000 | 3.0993 |
| 3.0087 | 0.72 | 3100 | 3.0499 |
| 3.0087 | 0.74 | 3200 | 2.9317 |
| 3.0087 | 0.76 | 3300 | 3.0817 |
| 3.0087 | 0.79 | 3400 | 3.0035 |
| 2.9694 | 0.81 | 3500 | 3.0850 |
| 2.9694 | 0.83 | 3600 | 2.9948 |
| 2.9694 | 0.86 | 3700 | 2.9874 |
| 2.9694 | 0.88 | 3800 | 2.9202 |
| 2.9694 | 0.9 | 3900 | 2.9322 |
| 2.8277 | 0.93 | 4000 | 2.9195 |
| 2.8277 | 0.95 | 4100 | 2.8638 |
| 2.8277 | 0.97 | 4200 | 2.8809 |
| 2.8277 | 1.0 | 4300 | 2.8872 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
anton-l/wav2vec2-base-finetuned-ks | anton-l | 2021-10-21T11:04:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0952
- Accuracy: 0.9823
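Purely as a usage sketch (not part of the original card; the `.wav` path is a placeholder), keyword spotting can typically be run through the audio-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-finetuned-ks")

# Returns the most likely keyword labels with scores for the given clip
predictions = classifier("sample.wav")
print(predictions)
```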
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7908 | 1.0 | 399 | 0.6776 | 0.9009 |
| 0.3202 | 2.0 | 798 | 0.2061 | 0.9763 |
| 0.221 | 3.0 | 1197 | 0.1257 | 0.9785 |
| 0.1773 | 4.0 | 1596 | 0.0990 | 0.9813 |
| 0.1729 | 5.0 | 1995 | 0.0952 | 0.9823 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
BSC-LT/roberta-large-bne-capitel-ner | BSC-LT | 2021-10-21T10:31:30Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
---
**⚠️ NOTICE ⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-ner
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset.
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
## Evaluation and results
F1 Score: 0.8998
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
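As a hedged usage sketch (this snippet is not part of the original card; the model id follows the new URL given in the notice above, and the example sentence is borrowed from a sibling model card):
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="PlanTL-GOB-ES/roberta-large-bne-capitel-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)
print(ner("Festival de San Sebastián: Johnny Depp recibirá el premio Donostia."))
```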
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BSC-LT/roberta-base-bne | BSC-LT | 2021-10-21T10:30:31Z | 2,054 | 9 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"national library of spain",
"spanish",
"bne",
"es",
"dataset:bne",
"arxiv:1907.11692",
"arxiv:2107.07253",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"
metrics:
- "ppl"
widget:
- text: "Este aรฑo las campanadas de La Sexta las presentarรก <mask>."
- text: "David Broncano es un presentador de La <mask>."
- text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
- text: "Hay base legal dentro del marco <mask> actual."
---
**⚠️ NOTICE ⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne
# RoBERTa base trained with data from National Library of Spain (BNE)
## Model Description
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Training corpora and preprocessing
The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.
To obtain a high-quality training corpus, the raw data was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of malformed sentences, and deduplication of repetitive content. Document boundaries were kept during this process. This yielded 2TB of clean Spanish text, and a further global deduplication pass across the corpus reduced it to 570GB.
Some of the statistics of the corpus:
| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |
## Tokenization and pre-training
The training corpus was tokenized using the byte-level Byte-Pair Encoding (BPE) scheme of the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens. The RoBERTa-base-bne pre-training consists of masked language model training following the approach used for RoBERTa base. Training lasted a total of 48 hours on 16 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM.
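As an illustrative sketch only (not part of the original card), the masked-language-model checkpoint can be queried with the fill-mask pipeline, here using one of the widget examples above; the model id follows the new URL given in the notice:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="PlanTL-GOB-ES/roberta-base-bne")

# Top predictions for the masked token
for prediction in fill_mask("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."):
    print(prediction["sequence"], round(prediction["score"], 4))
```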
## Evaluation and results
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BSC-LT/roberta-base-bne-capitel-pos | BSC-LT | 2021-10-21T10:29:55Z | 27 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "pos"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
widget:
- text: "Festival de San Sebastiรกn: Johnny Depp recibirรก el premio Donostia en pleno rifirrafe judicial con Amber Heard"
- text: "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
- text: "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."
- text: "El Tribunal Superior de Justicia se pronunciรณ ayer: \"Hay base legal dentro del marco jurรญdico actual\"."
---
**⚠️ NOTICE ⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-pos
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).
## Evaluation and results
F1 Score: 0.9846 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
aditeyabaral/sentencetransformer-bert-base-cased | aditeyabaral | 2021-10-21T09:50:09Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-base-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-base-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-base-cased')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-base-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-base-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Roberta55/deberta-base-mnli-finetuned-cola | Roberta55 | 2021-10-21T09:07:56Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: deberta-base-mnli-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6281691768918801
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-mnli-finetuned-cola
This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8205
- Matthews Correlation: 0.6282
## Model description
More information needed
## Intended uses & limitations
More information needed
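Pending more details, a minimal inference sketch with the `transformers` pipeline is shown below (the example sentence is illustrative, and the `LABEL_0`/`LABEL_1` names returned depend on the exported config — check `model.config.id2label` for the acceptable/unacceptable mapping):
```python
from transformers import pipeline

# CoLA-style acceptability classification with the fine-tuned checkpoint
classifier = pipeline("text-classification", model="Roberta55/deberta-base-mnli-finetuned-cola")
print(classifier("The book was written by the author."))
```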
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4713 | 1.0 | 535 | 0.5110 | 0.5797 |
| 0.2678 | 2.0 | 1070 | 0.6648 | 0.5154 |
| 0.1811 | 3.0 | 1605 | 0.6681 | 0.6121 |
| 0.113 | 4.0 | 2140 | 0.8205 | 0.6282 |
| 0.0831 | 5.0 | 2675 | 1.0413 | 0.6057 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bochaowei/t5-small-finetuned-xsum-wei2 | bochaowei | 2021-10-21T07:21:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 29.2287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4131
- Rouge1: 29.2287
- Rouge2: 8.4073
- Rougel: 23.0934
- Rougelsum: 23.0954
- Gen Len: 18.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
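Pending more details, a minimal summarization sketch with the `transformers` pipeline is shown below (the article text and generation settings are illustrative; the pipeline relies on the `summarize:` task prefix inherited from the `t5-small` config):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-xsum-wei2")

article = "The full text of a news article goes here ..."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```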
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.633 | 1.0 | 17004 | 2.4131 | 29.2287 | 8.4073 | 23.0934 | 23.0954 | 18.8236 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/raquelbaron__ | huggingtweets | 2021-10-21T02:55:21Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/raquelbaron__/1634784917653/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1384354978374950920/RwG59WAc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Raquel Baron</div>
<div style="text-align: center; font-size: 14px;">@raquelbaron__</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Raquel Baron.
| Data | Raquel Baron |
| --- | --- |
| Tweets downloaded | 120 |
| Retweets | 19 |
| Short tweets | 15 |
| Tweets kept | 86 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39wuu832/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @raquelbaron__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2cnx0lr4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2cnx0lr4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/raquelbaron__')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tucan9389/distilbert-base-uncased-finetuned-cola | tucan9389 | 2021-10-21T00:28:21Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5308757570358055
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7501
- Matthews Correlation: 0.5309
## Model description
More information needed
## Intended uses & limitations
More information needed
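Pending more details, a minimal inference sketch without the pipeline API is shown below (the example sentence is illustrative; which class index means "acceptable" is an assumption — check `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "tucan9389/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("They drank the pub dry.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(torch.softmax(logits, dim=-1))  # probabilities over the two CoLA classes
```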
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5286 | 1.0 | 535 | 0.5067 | 0.4301 |
| 0.3469 | 2.0 | 1070 | 0.5216 | 0.4802 |
| 0.2343 | 3.0 | 1605 | 0.6431 | 0.5002 |
| 0.1753 | 4.0 | 2140 | 0.7501 | 0.5309 |
| 0.1251 | 5.0 | 2675 | 0.8695 | 0.5222 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
AyushPJ/ai-club-inductions-21-nlp-distilBERT | AyushPJ | 2021-10-20T23:38:45Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-distilBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-distilBERT
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
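Pending more details, a minimal extractive question-answering sketch with the `transformers` pipeline is shown below (the question and context are purely illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="AyushPJ/ai-club-inductions-21-nlp-distilBERT")

result = qa(question="Which library was used for training?",
            context="The model was fine-tuned for extractive question answering "
                    "using the Hugging Face Transformers Trainer.")
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```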
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cu110
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/s66jewelevans | huggingtweets | 2021-10-20T23:06:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/s66jewelevans/1634771194675/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1313199276852342784/fJ8Lb2C__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jewel Evans</div>
<div style="text-align: center; font-size: 14px;">@s66jewelevans</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jewel Evans.
| Data | Jewel Evans |
| --- | --- |
| Tweets downloaded | 1714 |
| Retweets | 2 |
| Short tweets | 20 |
| Tweets kept | 1692 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ec5yuuj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @s66jewelevans's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kxbfdnt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kxbfdnt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/s66jewelevans')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bochaowei/t5-small-finetuned-xsum-wei1 | bochaowei | 2021-10-20T18:33:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | Trained on 20% of the training data.
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 27.5875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5287
- Rouge1: 27.5875
- Rouge2: 7.4083
- Rougel: 21.5654
- Rougelsum: 21.5716
- Gen Len: 18.8205
## Model description
More information needed
## Intended uses & limitations
More information needed
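Pending more details, a minimal generation sketch that calls `generate` directly is shown below (the article text, the `summarize:` prefix, and the beam settings are assumptions based on standard T5 usage, not on this card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "bochaowei/t5-small-finetuned-xsum-wei1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

text = "summarize: " + "The full text of a news article goes here ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```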
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7677 | 1.0 | 3401 | 2.5441 | 27.4235 | 7.2208 | 21.3535 | 21.3636 | 18.8311 |
| 2.735 | 2.0 | 6802 | 2.5287 | 27.5875 | 7.4083 | 21.5654 | 21.5716 | 18.8205 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
monologg/koelectra-base-discriminator | monologg | 2021-10-20T16:55:57Z | 1,292 | 1 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"korean",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions.tolist()[1:-1])))
```
|
monologg/koelectra-base-generator | monologg | 2021-10-20T16:55:00Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"korean",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA (Base Generator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-generator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForMaskedLM
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="monologg/koelectra-base-generator",
tokenizer="monologg/koelectra-base-generator"
)
print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token)))
```
|
monologg/koelectra-base-v3-discriminator | monologg | 2021-10-20T16:53:40Z | 31,234 | 30 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"korean",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA v3 (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-v3-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 11229, 29173, 13352, 25541, 4110, 7824, 17788, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v3-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions.tolist()[1:-1])))
```
|
jbarry/irish-gpt2 | jbarry | 2021-10-20T16:40:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | This model was trained on the OSCAR ga dataset for experimental purposes. The files used for training the tokenizer and model are included in this repository.
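A minimal text-generation sketch (the Irish prompt and the decoding settings are purely illustrative):
```python
from transformers import pipeline

# Text generation with the experimental Irish GPT-2
generator = pipeline("text-generation", model="jbarry/irish-gpt2")
print(generator("Tá an aimsir", max_length=30, num_return_sequences=2))
``` |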
YushiUeda/test | YushiUeda | 2021-10-20T14:48:21Z | 4 | 0 | espnet | [
"espnet",
"audio",
"diarization",
"dataset:mini_librispeech",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- diarization
language:
datasets:
- mini_librispeech
license: cc-by-4.0
---
## ESPnet2 DIAR model
### `YushiUeda/test`
This model was trained by Yushi Ueda using the mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 4dfa2be4331d3d68f124aa5fd81f63217a7278a4
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/test
```
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Wed Aug 25 23:29:07 EDT 2021`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `19bcd34f9395e01e54a97c4db5ecbcedb429dd92`
- Commit date: `Tue Aug 24 19:50:44 2021 -0400`
## `diar_train_diar_raw_max_epoch20`
### DER
`dev_clean_2_ns2_beta2_500`
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med1_collar0.0|32.42|
|result_th0.3_med11_collar0.0|32.03|
|result_th0.4_med1_collar0.0|30.96|
|result_th0.4_med11_collar0.0|30.26|
|result_th0.5_med1_collar0.0|30.35|
|result_th0.5_med11_collar0.0|29.37|
|result_th0.6_med1_collar0.0|30.77|
|result_th0.6_med11_collar0.0|29.52|
|result_th0.7_med1_collar0.0|32.60|
|result_th0.7_med11_collar0.0|31.03|
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw_max_epoch20
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
loss_type: pit
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
required:
- output_dir
version: 0.10.2a1
distributed: false
```
</details>
|
Monsia/autonlp-tweets-classification-23044997 | Monsia | 2021-10-20T14:38:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:Monsia/autonlp-data-tweets-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- Monsia/autonlp-data-tweets-classification
co2_eq_emissions: 4.819872182577655
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 23044997
- CO2 Emissions (in grams): 4.819872182577655
## Validation Metrics
- Loss: 0.001594889909029007
- Accuracy: 0.9997478885667465
- Macro F1: 0.9991190902836993
- Micro F1: 0.9997478885667465
- Weighted F1: 0.9997476735518704
- Macro Precision: 0.9998014460161265
- Micro Precision: 0.9997478885667465
- Weighted Precision: 0.9997479944069787
- Macro Recall: 0.9984426545713851
- Micro Recall: 0.9997478885667465
- Weighted Recall: 0.9997478885667465
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Monsia/autonlp-tweets-classification-23044997
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
pere/norwegian-gptneo-blue-highlr | pere | 2021-10-20T10:57:21Z | 2 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # Norwegian GPTNeo Blue.
The first Norwegian GPTNeo model. This one is trained only on an administrative corpus. |
aditeyabaral/sentencetransformer-distilbert-hinglish-small | aditeyabaral | 2021-10-20T09:04:04Z | 173 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-distilbert-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mrm8488/t5-base-finetuned-break_data | mrm8488 | 2021-10-20T08:31:28Z | 962 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:break_data",
"arxiv:1910.10683",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- break_data
widget:
- text: "paraphrase: The composer of Sands Theme plays what type of guitar?"
---
# T5-base fine-tuned on break_data / QDMR-high-level
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [break_data](https://huggingface.co/nlp/viewer/?dataset=break_data&config=QDMR-high-level) dataset for **QDMRs**.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (QDMRs) - Dataset
Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| break_data | train | 17503 |
| break_data | valid | 3130 |
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The main change is in how the ```inputs``` and ```targets``` fed to the model are preprocessed: we frame the problem as a *paraphrasing task*.
## Model in Action
```python
# Tip: By now, install transformers from source
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-break_data")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-break_data")
def get_decomposition(question):
input_text = "paraphrase: %s </s>" % question
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=32)
return tokenizer.decode(output[0])
question = "The composer of Sands Theme plays what type of guitar?"
get_decomposition(question)
# output: 'return Sands Theme ;return composer of #1 ;return guitar that #2 plays'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
aditeyabaral/sentencetransformer-bert-hinglish-small | aditeyabaral | 2021-10-20T06:28:16Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
chrisjay/masakhane_benchmarks | chrisjay | 2021-10-20T05:55:51Z | 0 | 0 | null | [
"african-languages",
"machine-translation",
"text",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: african-languages
tags:
- african-languages
- machine-translation
- text
license: apache-2.0
model-index:
- name: Masakhane Benchmark Models
results:
- task:
name: Machine Translation
type: machine-translation
dataset:
name: masakhane benchmarks
args: african-languages
---
# Interacting with the Masakhane Benchmark Models
I created this demo for very easy interaction with the [benchmark models on Masakhane](https://github.com/masakhane-io/masakhane-mt/tree/master/benchmarks), which were trained with [JoeyNMT](https://github.com/chrisemezue/joeynmt) (my forked version).
To access the space click [here](https://huggingface.co/spaces/chrisjay/masakhane-benchmarks).
To include your language, all you need to do is (an example layout is sketched after this list):
1. Create a folder in the format *src-tgt/main* for your language pair, if it does not exist.
2. Inside the *main* folder put the following files:
1. model checkpoint. Rename it to `best.ckpt`.
2. `config.yaml` file. This is the JoeyNMT config file which loads the model and pre-processing parameters.
3. `src_vocab.txt` file.
4. `trg_vocab.txt` file.
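For illustration, a hypothetical English-Swahili folder would look like this (the `en-sw` name simply follows the *src-tgt* convention described above):
```
en-sw/
└── main/
    ├── best.ckpt      # renamed model checkpoint
    ├── config.yaml    # JoeyNMT config (model and pre-processing parameters)
    ├── src_vocab.txt
    └── trg_vocab.txt
```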
The space currently supports these languages:
| source language | target language |
|:---------------:|:---------------:|
| English | Swahili |
| English | Afrikaans |
| English | Arabic |
| English | Urhobo |
| English | Ẹ̀dó |
| Efik | English |
| English | Hausa |
| English | Igbo |
| English | Fon |
| English | Twi |
| English | Dendi |
| English | Ẹ̀sán |
| English | Isoko |
| English | Kamba |
| English | Luo |
| English | Southern Ndebele |
| English | Tshivenda |
| Shona | English |
| Swahili | English |
| Yoruba | English |
TO DO:
1. Include more languages from the benchmark. |
Manishl7/xlm-roberta-large-language-detection | Manishl7 | 2021-10-20T05:20:44Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | Language Detection Model for Nepali, English, Hindi and Spanish
Model fine-tuned on xlm-roberta-large
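A minimal usage sketch with the `transformers` pipeline (the example sentences are illustrative, and the label names returned depend on the model config — check `model.config.id2label` for the language mapping):
```python
from transformers import pipeline

# Language identification framed as text classification
detector = pipeline("text-classification", model="Manishl7/xlm-roberta-large-language-detection")

print(detector(["This is an English sentence",
                "यो नेपाली वाक्य हो",
                "Esta es una frase en español"]))
``` |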
aditeyabaral/sentencetransformer-distilbert-hinglish-big | aditeyabaral | 2021-10-20T01:24:00Z | 153 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-distilbert-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
yazdipour/text-to-sparql-t5-base-qald9 | yazdipour | 2021-10-19T23:25:20Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-base-2021-10-19_23-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-base-2021-10-19_23-02
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
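Pending more details, a minimal generation sketch is shown below (whether the model expects a task prefix in front of the question is not documented here, so feeding the plain question is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "yazdipour/text-to-sparql-t5-base-qald9"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

question = "Who is the mayor of Berlin?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # generated SPARQL query
```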
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 1.8300 | 19.0 | 0.3640 | 0.0346 | 0.1943 | 10.0358 | [72.88988261598658, 50.27455765710799, 35.93015446608462, 28.454070201643017] | 0.2281 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
aguilara42/audacity-Wav2Vec2-Base | aguilara42 | 2021-10-19T21:23:28Z | 0 | 1 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
---
# Speech to Text Model
## Being used for the `Audio Labeler` effect in Audacity
metadata:
```
metadata = {
'sample_rate': 16000,
'domain_tags': ['speech'],
'short_description': 'I will label your speech into text :]',
'long_description':
'This is an Audacity wrapper for the model, '
'forked from the repository '
'facebook/s2t-medium-librispeech-asr'
'This model was trained by Changhan Wang'
'and Yun Tang and Xutai Ma and Anne Wu'
'and Dmytro Okhonko and Juan Pino.',
'tags': ['speech-to-text'],
'effect_type': 'waveform-to-labels',
'multichannel': False,
'labels': ["<pad>", "<s>", "</s>", "<unk>", "|", "E", "T", "A", "O", "N", "I", "H", "S", "R", "D", "L", "U", "M", "W", "C", "F", "G", "Y", "P", "B", "V", "K", "'", "X", "J", "Q", "Z"],
}
``` |
aditeyabaral/sentencetransformer-bert-hinglish-big | aditeyabaral | 2021-10-19T19:38:38Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-bert-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
hugggof/ConvTasNet-DAMP-Vocals | hugggof | 2021-10-19T19:28:08Z | 0 | 2 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
sample_rate: 8000
---
This is an Audacity wrapper for the model, forked from the repository `groadabike/ConvTasNet_DAMP-VSEP_enhboth`.
This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.
The following info was copied directly from `groadabike/ConvTasNet_DAMP-VSEP_enhboth`:
### Description:
This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset.
### Training config:
```yaml
data:
channels: 1
n_src: 2
root_path: data
sample_rate: 16000
samples_per_track: 10
segment: 3.0
task: enh_both
filterbank:
kernel_size: 20
n_filters: 256
stride: 10
main_args:
exp_dir: exp/train_convtasnet
help: None
masknet:
bn_chan: 256
conv_kernel_size: 3
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 4
n_src: 2
norm_type: gLN
skip_chan: 256
optim:
lr: 0.0003
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 12
early_stop: True
epochs: 50
half_lr: True
num_workers: 12
```
### Results:
```yaml
si_sdr: 14.018196157142519
si_sdr_imp: 14.017103133809577
sdr: 14.498517291333885
sdr_imp: 14.463389151567865
sir: 24.149634529133372
sir_imp: 24.11450638936735
sar: 15.338597389045935
sar_imp: -137.30634122401517
stoi: 0.7639416744417206
stoi_imp: 0.1843383526963759
```
### License notice:
This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
|
hugggof/ConvTasNet_Libri3Mix_sepnoisy_16k | hugggof | 2021-10-19T19:26:57Z | 0 | 1 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
---
This is an Audacity wrapper for the model, forked from the repository `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`.
This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.
The following info was copied directly from `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`:
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
|
hugggof/ConvTasNet_WHAM_sepclean | hugggof | 2021-10-19T19:25:37Z | 0 | 0 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- audacity
inference: false
---
This is an Audacity wrapper for the model, forked from the repository `mpariente/ConvTasNet_WHAM_sepclean`.
This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid.
The following info was copied from `mpariente/ConvTasNet_WHAM_sepclean`:
### Description:
This model was trained by Manuel Pariente
using the wham/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the WHAM! dataset.
### Training config:
```yaml
data:
n_src: 2
mode: min
nondefault_nsrc: None
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/tr/
valid_dir: data/wav8k/min/cv/
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/wham
gpus: -1
help: None
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 2
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments:
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
### Results:
```yaml
si_sdr: 16.21326632846293
si_sdr_imp: 16.21441705664987
sdr: 16.615180021738933
sdr_imp: 16.464137807433435
sir: 26.860503975131923
sir_imp: 26.709461760826414
sar: 17.18312813480803
sar_imp: -131.99332048277296
stoi: 0.9619940905157323
stoi_imp: 0.2239480672473015
```
### License notice:
This work "ConvTasNet_WHAM!_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"ConvTasNet_WHAM!_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Manuel Pariente. |
hugggof/demucs_extra | hugggof | 2021-10-19T19:23:31Z | 0 | 0 | null | [
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags: audacity
---
## Music Source Separation in the Waveform Domain
This is the Demucs model, serialized from Facebook Research's pretrained models.
From Facebook research:
Demucs is based on a U-Net convolutional architecture inspired by Wave-U-Net and SING, with GLUs, a BiLSTM between the encoder and decoder, specific initialization of weights, and transposed convolutions in the decoder.
This is the `demucs_extra` version, meaning that it was trained on the MusDB dataset along with 150 extra songs of data.
See [facebookresearch's repository](https://github.com/facebookresearch/demucs) for more information on Demucs. |
huggingtweets/gerardsans | huggingtweets | 2021-10-19T19:13:05Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/gerardsans/1634670781074/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1431241007421665284/qoHnns8I_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">แธGerardSans/แณ๐คฃ๐ฌ๐ง</div>
<div style="text-align: center; font-size: 14px;">@gerardsans</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from แธGerardSans/แณ๐คฃ๐ฌ๐ง.
| Data | แธGerardSans/แณ๐คฃ๐ฌ๐ง |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 648 |
| Short tweets | 586 |
| Tweets kept | 2016 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/115pr1rh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gerardsans's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10heg4by) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10heg4by/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gerardsans')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
maxspaziani/bert-base-italian-xxl-uncased-finetuned-ComunaliRoma | maxspaziani | 2021-10-19T17:58:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-italian-xxl-uncased-finetuned-ComunaliRoma
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-italian-xxl-uncased-finetuned-ComunaliRoma
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-uncased](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5095
## Model description
More information needed
## Intended uses & limitations
More information needed
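A minimal fill-mask sketch with the `transformers` pipeline (the example sentence is only an illustrative assumption):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="maxspaziani/bert-base-italian-xxl-uncased-finetuned-ComunaliRoma",
)

# BERT-style models use the [MASK] token
print(fill_mask("Il consiglio comunale di Roma ha approvato la [MASK]."))
```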
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6717 | 1.0 | 1014 | 2.6913 |
| 2.4869 | 2.0 | 2028 | 2.5843 |
| 2.3411 | 3.0 | 3042 | 2.5095 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
meghana/hitalmqa-finetuned-squad | meghana | 2021-10-19T17:34:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hitalmqa-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hitalmqa-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab | patrickvonplaten | 2021-10-19T17:18:47Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-turkish-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-turkish-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4055
- Wer: 0.4800
## Model description
More information needed
## Intended uses & limitations
More information needed
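A minimal transcription sketch with the `transformers` library (the audio path is a placeholder; input is expected to be 16 kHz mono speech):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a 16 kHz mono recording (placeholder path)
speech, _ = librosa.load("sample_turkish.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```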
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0179 | 4.21 | 400 | 1.4935 | 1.0249 |
| 0.7075 | 8.42 | 800 | 0.4546 | 0.6071 |
| 0.3072 | 12.63 | 1200 | 0.3947 | 0.5401 |
| 0.2145 | 16.84 | 1600 | 0.4049 | 0.5194 |
| 0.1647 | 21.05 | 2000 | 0.4199 | 0.5003 |
| 0.1338 | 25.26 | 2400 | 0.4144 | 0.4859 |
| 0.116 | 29.47 | 2800 | 0.4055 | 0.4800 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Recognai/selectra_small | Recognai | 2021-10-19T15:28:17Z | 6 | 5 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"es",
"dataset:oscar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language:
- es
thumbnail: "url to a thumbnail used in social sharing"
license: apache-2.0
datasets:
- oscar
---
# SELECTRA: A Spanish ELECTRA
SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release a `small` and `medium` version with the following configuration:
| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| **SELECTRA small** | **12** | **256** | **22M** | **50k** | **512** | **True** |
| [SELECTRA medium](https://huggingface.co/Recognai/selectra_medium) | 12 | 384 | 41M | 50k | 512 | True |
**SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results** (see Metrics section below).
## Usage
From the original [ELECTRA model card](https://huggingface.co/google/electra-small-discriminator): "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")
sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]
print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando pan rosa con tomate y aceite de oliva .
-3.1 -3.6 -6.9 -3.0 0.19 -4.5 -3.3 -5.1 -5.7 -7.7 -4.4 -4.2
"""
```
However, you will probably want to fine-tune this model on a downstream task.
We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which can be used together with the zero-shot classification pipeline:
- [Zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small)
- [Zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium)
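A minimal sketch of the zero-shot pipeline with one of these checkpoints (the example sentence, candidate labels, and hypothesis template are illustrative assumptions):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="Recognai/zeroshot_selectra_small")

result = classifier(
    "El equipo ganó el partido por tres goles a cero",
    candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
    hypothesis_template="Este ejemplo es {}.",
)
print(result)
```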
## Metrics
We fine-tune our models on 3 different down-stream tasks:
- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
| --- | --- | --- | --- | --- |
| SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
| SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
| | | | | |
| [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
| [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
| [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
| [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
| [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
| [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
Some details of our fine-tuning runs:
- epochs: 5
- batch-size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
For all the details, check out our [selectra repo](https://github.com/recognai/selectra).
## Training
We pre-trained our SELECTRA models on the Spanish portion of the [Oscar](https://huggingface.co/datasets/oscar) dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
- steps: 300k
- batch-size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
For all details, check out our [selectra repo](https://github.com/recognai/selectra).
**Note:** Due to a misconfiguration in the pre-training scripts, the embeddings of vocabulary tokens containing accents were not optimized. If you fine-tune this model on a downstream task, you might consider using a tokenizer that does not strip the accents:
```python
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```
## Motivation
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models, and a lack of comparisons between those and their bigger siblings.
## Acknowledgment
This research was supported by the Google TPU Research Cloud (TRC) program.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Javier Lopez ([GitHub](https://github.com/javispp))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon)) |
Fhrozen/test_an4 | Fhrozen | 2021-10-19T15:20:32Z | 3 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:an4",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- an4
license: cc-by-4.0
---
## ESPnet2 ASR model
### `Fhrozen/test_an4`
This model was trained by Fhrozen using an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b8df4c928e132acff78d196988bdb68a66987952
pip install -e .
cd egs2/an4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Fhrozen/test_an4
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 20 00:00:46 JST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `b8df4c928e132acff78d196988bdb68a66987952`
- Commit date: `Tue Oct 19 07:48:11 2021 -0400`
## asr_train_raw_en_bpe30
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|773|4.0|22.3|73.7|0.1|96.1|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|591|2.7|21.8|75.5|0.0|97.3|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2565|17.2|16.4|66.4|1.0|83.8|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|1915|15.5|16.4|68.1|0.9|85.5|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2695|21.1|15.6|63.3|0.9|79.9|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|2015|19.4|15.6|65.0|0.9|81.5|100.0|
## ASR config
<details><summary>expand</summary>
```
config: null
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_raw_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe30/train/speech_shape
- exp/asr_stats_raw_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe30/valid/speech_shape
- exp/asr_stats_raw_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev/wav.scp
- speech
- sound
- - dump/raw/train_nodev/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf: {}
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.5
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe30/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: rnn
encoder_conf: {}
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
config: conf/train_lm.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_train_lm_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 256
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/lm_stats_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/lm_train.txt
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
model_conf:
ignore_id: 0
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
lm: seq_rnn
lm_conf:
unit: 650
nlayers: 2
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
|
doc2query/all-with_prefix-t5-base-v1 | doc2query | 2021-10-19T12:52:47Z | 1,990 | 10 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/reddit-title-body",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- sentence-transformers/reddit-title-body
- sentence-transformers/embedding-training-data
widget:
- text: "text2reddit: Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/all-with_prefix-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/all-with_prefix-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
prefix = "answer2question"
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
text = prefix+": "+text
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 575k training steps. For the training script, see `train_script.py` in this repository.
The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a large collection of datasets. For the exact datasets names and weights see the `data_config.json` in this repository. Most of the datasets are available at [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers).
The datasets include, among others:
- (title, body) pairs from [Reddit](https://huggingface.co/datasets/sentence-transformers/reddit-title-body)
- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!
- (title, review) pairs from Amazon reviews
- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ
- (question, duplicate_question) from Quora and WikiAnswers
- (title, abstract) pairs from S2ORC
## Prefix
This model was trained **with a prefix**: you start the text with a specific prefix that defines what type of output text you would like to receive. Depending on the prefix, the output is different.
E.g. the above text about Python produces the following output:
| Prefix | Output |
| --- | --- |
| answer2question | Why should I use python in my business? ; What is the difference between Python and.NET? ; what is the python design philosophy? |
| review2title | Python a powerful and useful language ; A new and improved programming language ; Object-oriented, practical and accessibl |
| abstract2title | Python: A Software Development Platform ; A Research Guide for Python X: Conceptual Approach to Programming ; Python : Language and Approach |
| text2query | is python a low level language? ; what is the primary idea of python? ; is python a programming language? |
These are all the available prefixes:
- text2reddit
- question2title
- answer2question
- abstract2title
- review2title
- news2title
- text2query
- question2question
For the datasets and weights for the different prefixes, see `data_config.json` in this repository.
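Building on the Usage snippet above, switching prefixes only changes the first token of the input. A minimal sketch (outputs will differ from the table above, since generation is sampled):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/all-with_prefix-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

passage = "Python is an interpreted, high-level and general-purpose programming language."
for prefix in ["answer2question", "review2title", "abstract2title", "text2query"]:
    input_ids = tokenizer.encode(f"{prefix}: {passage}", max_length=384, truncation=True, return_tensors='pt')
    outputs = model.generate(input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95)
    print(prefix, "->", tokenizer.decode(outputs[0], skip_special_tokens=True))
```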
|
Jeska/autonlp-vaccinfaq-22144706 | Jeska | 2021-10-19T12:33:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:Jeska/autonlp-data-vaccinfaq",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- Jeska/autonlp-data-vaccinfaq
co2_eq_emissions: 27.135492487925884
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 22144706
- CO2 Emissions (in grams): 27.135492487925884
## Validation Metrics
- Loss: 1.81697416305542
- Accuracy: 0.6377269139700079
- Macro F1: 0.5181293370145044
- Micro F1: 0.6377269139700079
- Weighted F1: 0.631117826235572
- Macro Precision: 0.5371452512845428
- Micro Precision: 0.6377269139700079
- Weighted Precision: 0.6655055695465463
- Macro Recall: 0.5609328178925124
- Micro Recall: 0.6377269139700079
- Weighted Recall: 0.6377269139700079
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Jeska/autonlp-vaccinfaq-22144706
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
ringabelle/bert-base-cased-finetuned-COVID-tweets | ringabelle | 2021-10-19T11:38:14Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-COVID-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-COVID-tweets
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 194 | 2.4419 |
| No log | 2.0 | 388 | 2.4230 |
| 2.5821 | 3.0 | 582 | 2.3678 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-small | yazdipour | 2021-10-19T11:17:46Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-19_10-17_lastDS
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.3129461705684662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-19_10-17_lastDS
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2335
- Gen Len: 19.0
- P: 0.5580
- R: 0.0884
- F1: 0.3129
- Score: 5.9585
- Bleu-precisions: [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271]
- Bleu-bp: 0.0763
## Model description
More information needed
## Intended uses & limitations
More information needed
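A minimal inference sketch; the expected input format (for example, whether a task prefix is needed) is not documented here, so the plain natural-language question below is an assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yazdipour/text-to-sparql-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "Who is the author of Le Petit Prince?"  # assumed input format
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```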
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3166 | 1.0 | 4807 | 0.2335 | 19.0 | 0.5580 | 0.0884 | 0.3129 | 5.9585 | [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271] | 0.0763 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
DeepESP/gpt2-spanish | DeepESP | 2021-10-19T08:52:48Z | 5,155 | 36 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"es",
"dataset:ebooks",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: es
tags:
- GPT-2
- Spanish
- ebooks
- nlg
datasets:
- ebooks
widget:
- text: "Quisiera saber que va a suceder"
license: mit
---
# GPT2-Spanish
GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the small version of the original OpenAI GPT2 model.
## Corpus
This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization).
## Tokenizer
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.
This tokenizer was trained from scratch on the Spanish corpus, since the tokenizer of the English models showed limitations in capturing the semantic relations of Spanish, owing to the morphosyntactic differences between the two languages.
Apart from the special token "<|endoftext|>" for text ending in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>" (..)"<|ax9|>" were included so that they can serve as prompts in future training.
## Training
The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers.
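A minimal generation sketch with the same libraries (the prompt is taken from the widget above):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="DeepESP/gpt2-spanish")
print(generator("Quisiera saber que va a suceder", max_length=50, num_return_sequences=3))
```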
## Authors
The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h).
Thanks to the members of the community who collaborated with funding for the initial tests.
## Cautions
The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
|
hiiamsid/autonlp-Summarization-20684328 | hiiamsid | 2021-10-19T05:09:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autonlp",
"es",
"dataset:hiiamsid/autonlp-data-Summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: es
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- hiiamsid/autonlp-data-Summarization
co2_eq_emissions: 1133.9679082840014
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 20684328
- CO2 Emissions (in grams): 1133.9679082840014
## Validation Metrics
- Loss: nan
- Rouge1: 9.4193
- Rouge2: 0.91
- RougeL: 7.9376
- RougeLsum: 8.0076
- Gen Len: 10.65
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/hiiamsid/autonlp-Summarization-20684328
``` |
yazdipour/text-to-sparql-t5-small-2021-10-18_23-00 | yazdipour | 2021-10-19T00:01:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-small-2021-10-18_23-00
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_23-00
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2284
- Gen Len: 19.0
- Bertscorer-p: 0.5644
- Bertscorer-r: 0.0815
- Bertscorer-f1: 0.3120
- Sacrebleu-score: 5.5690
- Sacrebleu-precisions: [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607]
- Bleu-bp: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:---------------------------------------------------------------------------:|:-------:|
| 0.2808 | 1.0 | 4772 | 0.2284 | 19.0 | 0.5644 | 0.0815 | 0.3120 | 5.5690 | [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607] | 0.0728 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
mmcquade11/autonlp-imdb-test-21134442 | mmcquade11 | 2021-10-18T20:16:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:mmcquade11/autonlp-data-imdb-test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- mmcquade11/autonlp-data-imdb-test
co2_eq_emissions: 298.7849611952843
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 21134442
- CO2 Emissions (in grams): 298.7849611952843
## Validation Metrics
- Loss: 0.21618066728115082
- Accuracy: 0.9393
- Precision: 0.9360730593607306
- Recall: 0.943
- AUC: 0.98362804
- F1: 0.9395237620803029
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-imdb-test-21134442
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mmcquade11/autonlp-imdb-test-21134442", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-imdb-test-21134442", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
gagan3012/pickuplines | gagan3012 | 2021-10-18T19:53:36Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: pickuplines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pickuplines
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
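A minimal generation sketch (the prompt is only an illustrative assumption):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gagan3012/pickuplines")
print(generator("Hey, are you", max_length=30, num_return_sequences=3))
```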
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-base-2021-10-18_16-15 | yazdipour | 2021-10-18T18:58:01Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-base-2021-10-18_16-15
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-18_16-15
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1294
- Gen Len: 19.0
- Bertscorer-p: 0.5827
- Bertscorer-r: 0.0812
- Bertscorer-f1: 0.3202
- Sacrebleu-score: 5.9410
- Sacrebleu-precisions: [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601]
- Bleu-bp: 0.0721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| nan | 1.0 | 4772 | 0.1294 | 19.0 | 0.5827 | 0.0812 | 0.3202 | 5.9410 | [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601] | 0.0721 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
mmcquade11/autonlp-imdb-test-21134453 | mmcquade11 | 2021-10-18T17:47:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:mmcquade11/autonlp-data-imdb-test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- mmcquade11/autonlp-data-imdb-test
co2_eq_emissions: 38.102565360610484
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 21134453
- CO2 Emissions (in grams): 38.102565360610484
## Validation Metrics
- Loss: 0.172550767660141
- Accuracy: 0.9355
- Precision: 0.9362853135644159
- Recall: 0.9346
- AUC: 0.98267064
- F1: 0.9354418977079372
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-imdb-test-21134453
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
cambridgeltl/trans-encoder-bi-simcse-roberta-base | cambridgeltl | 2021-10-18T13:29:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2109.13059",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
---
### cambridgeltl/trans-encoder-bi-simcse-roberta-base
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-roberta-base](https://huggingface.co/princeton-nlp/unsup-simcse-roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
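A minimal sketch of extracting the `[CLS]` (pre-pooler) representation with the `transformers` library (the sentence pair is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "cambridgeltl/trans-encoder-bi-simcse-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["A man is playing a guitar.", "Someone is playing an instrument."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state[:, 0]  # [CLS] before the pooler
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```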
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
yazdipour/text-to-sparql-t5-small-2021-10-18_12-12 | yazdipour | 2021-10-18T13:14:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-small-2021-10-18_12-12
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_12-12
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3284
- Gen Len: 19.0
- Bertscorer-p: 0.5420
- Bertscorer-r: 0.0732
- Bertscorer-f1: 0.2972
- Sacrebleu-score: 4.8763
- Sacrebleu-precisions: [87.2581084764241, 73.48869132519009, 64.19139944127409, 58.342420937840785]
- Bleu-bp: 0.0697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| 0.4209 | 1.0 | 4772 | 0.3284 | 19.0 | 0.5420 | 0.0732 | 0.2972 | 4.8763 | [87.2581084764241, 73.48869132519009, 64.19139944127409, 58.342420937840785] | 0.0697 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-small-2021-10-18_09-32 | yazdipour | 2021-10-18T10:33:05Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-18_09-32
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.26458749175071716
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_09-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5119
- Gen Len: 19.0
- P: 0.4884
- R: 0.0583
- F1: 0.2646
- Score: 3.5425
- Bleu-precisions: [82.80295919500207, 62.695879280325016, 50.2215675749897, 44.03052700138759]
- Bleu-bp: 0.0609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.7088 | 1.0 | 4772 | 0.5119 | 19.0 | 0.4884 | 0.0583 | 0.2646 | 3.5425 | [82.80295919500207, 62.695879280325016, 50.2215675749897, 44.03052700138759] | 0.0609 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Ching/negation_detector | Ching | 2021-10-18T10:32:43Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | This question-answering model was fine-tuned to detect negation expressions.
How to use:
question: negation
context: That is not safe!
Answer: not
question: negation
context: Weren't we going to go to the moon?
Answer: Weren't
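A minimal sketch of this usage with the `transformers` question-answering pipeline (the printed answers are the expected outputs from the examples above, not verified runs):
```python
from transformers import pipeline

# Load the fine-tuned QA model and use the fixed question "negation"
# to extract the negation cue from a sentence.
qa = pipeline("question-answering", model="Ching/negation_detector")

print(qa(question="negation", context="That is not safe!")["answer"])                    # expected: not
print(qa(question="negation", context="Weren't we going to go to the moon?")["answer"])  # expected: Weren't
```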
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | CAMeL-Lab | 2021-10-18T10:18:01Z | 134 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุนุงู
ู ุงูู ุ'
---
# CAMeLBERT-CA POS-EGY Model
## Model description
**CAMeLBERT-CA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy')
>>> text = 'ุนุงู
ู ุงูู ุ'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9990943, 'index': 1, 'word': 'ุนุงู
ู', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.99863535, 'index': 2, 'word': 'ุงูู', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99990875, 'index': 3, 'word': 'ุ', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | CAMeL-Lab | 2021-10-18T10:15:57Z | 12 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุนุงู
ู ุงูู ุ'
---
# CAMeLBERT-Mix POS-EGY Model
## Model description
**CAMeLBERT-Mix POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy')
>>> text = 'ุนุงู
ู ุงูู ุ'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9972628, 'index': 1, 'word': 'ุนุงู
ู', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9525163, 'index': 2, 'word': 'ุงูู', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99869114, 'index': 3, 'word': 'ุ', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | CAMeL-Lab | 2021-10-18T10:15:37Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุนุงู
ู ุงูู ุ'
---
# CAMeLBERT-DA POS-EGY Model
## Model description
**CAMeLBERT-DA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy')
>>> text = 'ุนุงู
ู ุงูู ุ'
>>> pos(text)
[{'entity': 'adj', 'score': 0.99843216, 'index': 1, 'word': 'ุนุงู
ู', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9990083, 'index': 2, 'word': 'ุงูู', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.82973784, 'index': 3, 'word': 'ุ', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | CAMeL-Lab | 2021-10-18T10:13:34Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุดูููู ุ ุดุฎุจุงุฑู ุ'
---
# CAMeLBERT-CA POS-GLF Model
## Model description
**CAMeLBERT-CA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf')
>>> text = 'ุดูููู ุ ุดุฎุจุงุฑู ุ'
>>> pos(text)
[{'entity': 'noun', 'score': 0.99572617, 'index': 1, 'word': 'ุดููู', 'start': 0, 'end': 4}, {'entity': 'noun', 'score': 0.9411187, 'index': 2, 'word': '##ู', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999661, 'index': 3, 'word': 'ุ', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.99286526, 'index': 4, 'word': 'ุด', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9983397, 'index': 5, 'word': '##ุฎุจุงุฑ', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9609381, 'index': 6, 'word': '##ู', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999668, 'index': 7, 'word': 'ุ', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | CAMeL-Lab | 2021-10-18T09:58:40Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุดูููู ุ ุดุฎุจุงุฑู ุ'
---
# CAMeLBERT-DA POS-GLF Model
## Model description
**CAMeLBERT-DA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf')
>>> text = 'ุดูููู ุ ุดุฎุจุงุฑู ุ'
>>> pos(text)
[{'entity': 'noun', 'score': 0.84596395, 'index': 1, 'word': 'ุดููู', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.7230489, 'index': 2, 'word': '##ู', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.99996364, 'index': 3, 'word': 'ุ', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9990874, 'index': 4, 'word': 'ุด', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99985224, 'index': 5, 'word': '##ุฎุจุงุฑ', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9988868, 'index': 6, 'word': '##ู', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999683, 'index': 7, 'word': 'ุ', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | CAMeL-Lab | 2021-10-18T09:44:57Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุฅู
ุงุฑุฉ ุฃุจูุธุจู ูู ุฅุญุฏู ุฅู
ุงุฑุงุช ุฏููุฉ ุงูุฅู
ุงุฑุงุช ุงูุนุฑุจูุฉ ุงูู
ุชุญุฏุฉ ุงูุณุจุน'
---
# CAMeLBERT-CA POS-MSA Model
## Model description
**CAMeLBERT-CA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa')
>>> text = 'ุฅู
ุงุฑุฉ ุฃุจูุธุจู ูู ุฅุญุฏู ุฅู
ุงุฑุงุช ุฏููุฉ ุงูุฅู
ุงุฑุงุช ุงูุนุฑุจูุฉ ุงูู
ุชุญุฏุฉ ุงูุณุจุน'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999758, 'index': 1, 'word': 'ุฅู
ุงุฑุฉ', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997559, 'index': 2, 'word': 'ุฃุจูุธุจู', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.99996257, 'index': 3, 'word': 'ูู', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9958452, 'index': 4, 'word': 'ุฅุญุฏู', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999635, 'index': 5, 'word': 'ุฅู
ุง', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99991685, 'index': 6, 'word': '##ุฑุงุช', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997497, 'index': 7, 'word': 'ุฏููุฉ', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999795, 'index': 8, 'word': 'ุงูุฅู
ุงุฑุงุช', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99924207, 'index': 9, 'word': 'ุงูุนุฑุจูุฉ', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99994195, 'index': 10, 'word': 'ุงูู
ุชุญุฏุฉ', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9997414, 'index': 11, 'word': 'ุงูุณุจุน', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa | CAMeL-Lab | 2021-10-18T09:34:42Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุฅู
ุงุฑุฉ ุฃุจูุธุจู ูู ุฅุญุฏู ุฅู
ุงุฑุงุช ุฏููุฉ ุงูุฅู
ุงุฑุงุช ุงูุนุฑุจูุฉ ุงูู
ุชุญุฏุฉ ุงูุณุจุน'
---
# CAMeLBERT-MSA POS-MSA Model
## Model description
**CAMeLBERT-MSA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa')
>>> text = 'ุฅู
ุงุฑุฉ ุฃุจูุธุจู ูู ุฅุญุฏู ุฅู
ุงุฑุงุช ุฏููุฉ ุงูุฅู
ุงุฑุงุช ุงูุนุฑุจูุฉ ุงูู
ุชุญุฏุฉ ุงูุณุจุน'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999764, 'index': 1, 'word': 'ุฅู
ุงุฑุฉ', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.99991846, 'index': 2, 'word': 'ุฃุจูุธุจู', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998356, 'index': 3, 'word': 'ูู', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99368894, 'index': 4, 'word': 'ุฅุญุฏู', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999426, 'index': 5, 'word': 'ุฅู
ุง', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.9999339, 'index': 6, 'word': '##ุฑุงุช', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99996775, 'index': 7, 'word': 'ุฏููุฉ', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99996895, 'index': 8, 'word': 'ุงูุฅู
ุงุฑุงุช', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990183, 'index': 9, 'word': 'ุงูุนุฑุจูุฉ', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999347, 'index': 10, 'word': 'ุงูู
ุชุญุฏุฉ', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99931145, 'index': 11, 'word': 'ุงูุณุจุน', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
yazdipour/text-to-sparql-t5-base-2021-10-17_23-40 | yazdipour | 2021-10-18T02:23:08Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-base-2021-10-17_23-40
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.2649857699871063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-17_23-40
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2645
- Gen Len: 19.0
- P: 0.5125
- R: 0.0382
- F1: 0.2650
- Score: 5.1404
- Bleu-precisions: [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422]
- Bleu-bp: 0.0707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3513 | 1.0 | 4807 | 0.2645 | 19.0 | 0.5125 | 0.0382 | 0.2650 | 5.1404 | [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422] | 0.0707 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
airKlizz/bart-large-multi-fr-wiki-news | airKlizz | 2021-10-17T20:10:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: fr
license: mit
---
|
yazdipour/text-to-sparql-t5-small-2021-10-17_18-47 | yazdipour | 2021-10-17T19:48:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-17_18-47
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.2345714420080185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-17_18-47
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5258
- Gen Len: 19.0
- P: 0.4582
- R: 0.0278
- F1: 0.2346
- Score: 3.5848
- Bleu-precisions: [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059]
- Bleu-bp: 0.0631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.7575 | 1.0 | 4807 | 0.5258 | 19.0 | 0.4582 | 0.0278 | 0.2346 | 3.5848 | [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059] | 0.0631 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
MariamD/my-t5-qa-legal | MariamD | 2021-10-17T13:20:41Z | 2 | 1 | transformers | [
"transformers",
"pytorch",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
language: english
datasets:
- legal dataset
pipeline_tag: question-answering
--- |
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | CAMeL-Lab | 2021-10-17T12:10:17Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุงูุฎูู ูุงูููู ูุงูุจูุฏุงุก ุชุนุฑููู [SEP] ูุงูุณูู ูุงูุฑู
ุญ ูุงููุฑุทุงุณ ูุงูููู
'
---
# CAMeLBERT-Mix Poetry Classification Model
## Model description
**CAMeLBERT-Mix Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
    ['ุงูุฎูู ูุงูููู ูุงูุจูุฏุงุก ุชุนุฑููู' ,'ูุงูุณูู ูุงูุฑู
ุญ ูุงููุฑุทุงุณ ูุงูููู
'],
    ['ูู
 ููู
ุนูู
 ููู ุงูุชุจุฌููุง' ,'ูุงุฏ ุงูู
ุนูู
 ุงู ูููู ุฑุณููุง']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'ุงูุจุณูุท', 'score': 0.9937475919723511},
 {'label': 'ุงููุงู
ู', 'score': 0.971284031867981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | CAMeL-Lab | 2021-10-17T12:09:38Z | 13 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'ุงูุฎูู ูุงูููู ูุงูุจูุฏุงุก ุชุนุฑููู [SEP] ูุงูุณูู ูุงูุฑู
ุญ ูุงููุฑุทุงุณ ูุงูููู
'
---
# CAMeLBERT-CA Poetry Classification Model
## Model description
**CAMeLBERT-CA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
    ['ุงูุฎูู ูุงูููู ูุงูุจูุฏุงุก ุชุนุฑููู' ,'ูุงูุณูู ูุงูุฑู
ุญ ูุงููุฑุทุงุณ ูุงูููู
'],
    ['ูู
 ููู
ุนูู
 ููู ุงูุชุจุฌููุง' ,'ูุงุฏ ุงูู
ุนูู
 ุงู ูููู ุฑุณููุง']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'ุงูุจุณูุท', 'score': 0.9845284819602966},
 {'label': 'ุงููุงู
ู', 'score': 0.752918004989624}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | CAMeL-Lab | 2021-10-17T12:08:30Z | 475 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "ุฃูุง ุจุฎูุฑ"
---
# CAMeLBERT MSA SA Model
## Model description
**CAMeLBERT MSA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment")
>>> sentences = ['ุฃูุง ุจุฎูุฑ', 'ุฃูุง ูุณุช ุจุฎูุฑ']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment')
>>> sentences = ['ุฃูุง ุจุฎูุฑ', 'ุฃูุง ูุณุช ุจุฎูุฑ']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
castorini/monot5-base-msmarco-10k | castorini | 2021-10-17T11:24:22Z | 3,178 | 14 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
This model usually has a better zero-shot performance than `monot5-base-msmarco`, i.e., it performs better on datasets different from MS MARCO.
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
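Below is a rough, self-contained sketch of the scoring recipe described in the paper, using only `transformers` (the query/passage strings are illustrative, and the pygaggle examples linked above remain the reference implementation): the reranker reads `Query: ... Document: ... Relevant:` and scores a passage by the probability of generating "true" versus "false" as the first output token.
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "castorini/monot5-base-msmarco-10k"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

query = "how many calories are in an egg"            # illustrative query
passage = "A large egg contains about 72 calories."  # illustrative passage

enc = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

with torch.no_grad():
    logits = model(**enc, decoder_input_ids=decoder_input_ids).logits[0, 0]

true_id = tokenizer.encode("true")[0]    # first sub-token of "true"
false_id = tokenizer.encode("false")[0]  # first sub-token of "false"
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"relevance score: {score:.4f}")
```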
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
castorini/monot5-large-msmarco | castorini | 2021-10-17T11:20:56Z | 576 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs).
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | CAMeL-Lab | 2021-10-17T11:15:12Z | 35 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "ุฃูุง ุจุฎูุฑ"
---
# CAMeLBERT-CA SA Model
## Model description
**CAMeLBERT-CA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> sentences = ['ุฃูุง ุจุฎูุฑ', 'ุฃูุง ูุณุช ุจุฎูุฑ']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
>>> sentences = ['ุฃูุง ุจุฎูุฑ', 'ุฃูุง ูุณุช ุจุฎูุฑ']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner | CAMeL-Lab | 2021-10-17T11:07:13Z | 1,851 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "ุฅู
ุงุฑุฉ ุฃุจูุธุจู ูู ุฅุญุฏู ุฅู
ุงุฑุงุช ุฏููุฉ ุงูุฅู
ุงุฑุงุช ุงูุนุฑุจูุฉ ุงูู
ุชุญุฏุฉ ุงูุณุจุน"
---
# CAMeLBERT MSA NER Model
## Model description
**CAMeLBERT MSA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> sentence = simple_word_tokenize('ุฅู
ุงุฑุฉ ุฃุจูุธุจู ูู ุฅุญุฏู ุฅู
ุงุฑุงุช ุฏููุฉ ุงูุฅู
ุงุฑุงุช ุงูุนุฑุจูุฉ ุงูู
ุชุญุฏุฉ ุงูุณุจุน')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> ner("ุฅู
ุงุฑุฉ ุฃุจูุธุจู ูู ุฅุญุฏู ุฅู
ุงุฑุงุช ุฏููุฉ ุงูุฅู
ุงุฑุงุช ุงูุนุฑุจูุฉ ุงูู
ุชุญุฏุฉ ุงูุณุจุน")
[{'word': 'ุฃุจูุธุจู',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'ุงูุฅู
ุงุฑุงุช',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'ุงูุนุฑุจูุฉ',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'ุงูู
ุชุญุฏุฉ',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | CAMeL-Lab | 2021-10-17T11:05:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "ุนุงู
ู ุงูู ุ"
---
# CAMeLBERT-Mix DID NADI Model
## Model description
**CAMeLBERT-Mix DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi')
>>> sentences = ['ุนุงู
ู ุงูู ุ', 'ุดูููู ุ ุดุฎุจุงุฑู ุ']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.920274019241333},
{'label': 'Saudi_Arabia', 'score': 0.26750022172927856}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
gwima/ryan-sackmott | gwima | 2021-10-17T03:15:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
|
huggingtweets/rias_hot | huggingtweets | 2021-10-17T02:28:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/rias_hot/1634437684641/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1427157680863522818/jqfniv6o_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">RiasHot</div>
<div style="text-align: center; font-size: 14px;">@rias_hot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from RiasHot.
| Data | RiasHot |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 136 |
| Short tweets | 905 |
| Tweets kept | 2204 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2bwco1hp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rias_hot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/n6fp7izq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/n6fp7izq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rias_hot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061 | amansolanki | 2021-10-17T00:32:35Z | 1,906 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:amansolanki/autonlp-data-Tweet-Sentiment-Extraction",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- amansolanki/autonlp-data-Tweet-Sentiment-Extraction
co2_eq_emissions: 3.651199395353127
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 20114061
- CO2 Emissions (in grams): 3.651199395353127
## Validation Metrics
- Loss: 0.5046541690826416
- Accuracy: 0.8036219581211093
- Macro F1: 0.807095210403678
- Micro F1: 0.8036219581211093
- Weighted F1: 0.8039634739225368
- Macro Precision: 0.8076842795233988
- Micro Precision: 0.8036219581211093
- Weighted Precision: 0.8052135235094771
- Macro Recall: 0.8075241470527056
- Micro Recall: 0.8036219581211093
- Weighted Recall: 0.8036219581211093
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
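# The remaining lines are an illustrative addition, not part of the original AutoNLP snippet:
# they turn the raw logits into a predicted label using the mapping stored in the model config.
import torch
predicted_id = torch.argmax(outputs.logits, dim=-1).item()
print(model.config.id2label[predicted_id])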
``` |
pere/norwegian-gptneo-red-highlr | pere | 2021-10-16T19:14:34Z | 3 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # Norwegian GPTNeo Blue.
The first Norwegian GPTNeo model. This one is trained only on an administrative corpus.
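The card itself provides no usage example, so the snippet below is only a minimal sketch of how a GPT-Neo text-generation checkpoint is typically queried with `transformers`; the Norwegian prompt and the generation settings are illustrative assumptions, and since the repository appears to ship Flax (JAX) weights, loading into PyTorch may require `from_flax=True` (with `flax` installed) or a prior conversion.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the checkpoint; from_flax=True converts the published Flax weights to PyTorch on the fly.
tokenizer = AutoTokenizer.from_pretrained('pere/norwegian-gptneo-red-highlr')
model = AutoModelForCausalLM.from_pretrained('pere/norwegian-gptneo-red-highlr', from_flax=True)

generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
print(generator('Kommunen har vedtatt', max_length=50, num_return_sequences=1))
```
|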
huggingtweets/the_nftking | huggingtweets | 2021-10-16T14:11:01Z | 4 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/the_nftking/1634393457706/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434700639649599488/J63TSf--_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NFT KING ๐</div>
<div style="text-align: center; font-size: 14px;">@the_nftking</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NFT KING ๐.
| Data | NFT KING ๐ |
| --- | --- |
| Tweets downloaded | 163 |
| Retweets | 23 |
| Short tweets | 36 |
| Tweets kept | 104 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26d96n9m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_nftking's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/f7wd0e6f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/f7wd0e6f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/the_nftking')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingartists/6ix9ine | huggingartists | 2021-10-16T12:01:20Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/6ix9ine",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/6ix9ine
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b2b164a7c6c02dd0843ad597df5dbf4b.1000x1000x1.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค HuggingArtists Model ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">6ix9ine</div>
<a href="https://genius.com/artists/6ix9ine">
<div style="text-align: center; font-size: 14px;">@6ix9ine</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from 6ix9ine.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/6ix9ine).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/6ix9ine")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/eqmcaj0r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 6ix9ine's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/s5dpg3h2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/s5dpg3h2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/6ix9ine')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/6ix9ine")
model = AutoModelWithLMHead.from_pretrained("huggingartists/6ix9ine")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
lewtun/xlm-roberta-base-finetuned-marc-de | lewtun | 2021-10-16T11:38:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9934
- Mae: 0.4867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1514 | 1.0 | 308 | 1.0455 | 0.5221 |
| 0.9997 | 2.0 | 616 | 0.9934 | 0.4867 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
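The card stops at the training details, so the following is only a small inference sketch; the German example review and the reading of the predicted class index as a 1–5 star rating are illustrative assumptions about the amazon_reviews_multi label scheme rather than facts stated in the card.
```python
from transformers import pipeline

# Star-rating classifier fine-tuned on the German portion of amazon_reviews_multi
classifier = pipeline('text-classification', model='lewtun/xlm-roberta-base-finetuned-marc-de')

# The label returned is a class index; for this dataset it is usually interpreted as (stars - 1).
print(classifier('Das Produkt war leider eine große Enttäuschung.'))
```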
|
Yuri/xlm-roberta-base-finetuned-marc | Yuri | 2021-10-16T11:36:47Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9825
- Mae: 0.4956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1432 | 1.0 | 308 | 1.0559 | 0.5133 |
| 0.9883 | 2.0 | 616 | 0.9825 | 0.4956 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ashish-chouhan/xlm-roberta-base-finetuned-marc | ashish-chouhan | 2021-10-16T11:34:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0171
- Mae: 0.5310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1404 | 1.0 | 308 | 1.0720 | 0.5398 |
| 0.9805 | 2.0 | 616 | 1.0171 | 0.5310 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
DongHyoungLee/distilbert-base-uncased-finetuned-cola | DongHyoungLee | 2021-10-16T11:30:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.535587402888147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7335
- Matthews Correlation: 0.5356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 |
| 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 |
| 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 |
| 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 |
| 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
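As a complement to the metrics above, here is a minimal inference sketch; the example sentence and the usual CoLA reading of the two classes (0 = unacceptable, 1 = acceptable) are assumptions, since the card does not document the label mapping.
```python
from transformers import pipeline

# Linguistic-acceptability classifier fine-tuned on GLUE CoLA
classifier = pipeline('text-classification', model='DongHyoungLee/distilbert-base-uncased-finetuned-cola')

# Depending on the saved config, labels may appear as LABEL_0 / LABEL_1.
print(classifier('The book was written by three different authors.'))
```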
|
lewtun/xlm-roberta-base-finetuned-marc | lewtun | 2021-10-15T21:10:49Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9932
- Mae: 0.4838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
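For readers who want to reproduce this run with the `Trainer` API, the sketch below shows roughly equivalent `TrainingArguments`; the output directory, the per-epoch evaluation, and the omitted data preparation are assumptions, as the card only lists the hyperparameter values themselves.
```python
from transformers import TrainingArguments

# TrainingArguments mirroring the hyperparameters listed above
training_args = TrainingArguments(
    output_dir='xlm-roberta-base-finetuned-marc',  # assumed name, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type='linear',
    evaluation_strategy='epoch',
)
```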
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.05 | 1.0 | 860 | 1.0007 | 0.5074 |
| 0.9166 | 2.0 | 1720 | 0.9932 | 0.4838 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|