modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
mrm8488/electricidad-base-generator | mrm8488 | 2020-12-11T21:54:10Z | 7 | 3 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"es",
"arxiv:1406.2661",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: es
thumbnail: https://i.imgur.com/uxAvBfh.png
widget:
- text: "Madrid es una ciudad muy [MASK] en España."
---
## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh)
**Electricidad-base-generator** (uncased) is a ```base``` ELECTRA-like model (the generator in this case) trained on more than 20 GB of the [OSCAR](https://oscar-corpus.com/) Spanish corpus.
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Fast example of usage 🚀
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="mrm8488/electricidad-base-generator",
tokenizer="mrm8488/electricidad-base-generator"
)
print(
fill_mask(f"HuggingFace está creando {fill_mask.tokenizer.mask_token} que la comunidad usa para resolver tareas de NLP.")
)
# Output: [{'sequence': '[CLS] huggingface esta creando herramientas que la comunidad usa para resolver tareas de nlp. [SEP]', 'score': 0.0896105170249939, 'token': 8760, 'token_str': 'herramientas'}, ...]
```
## Acknowledgments
I thank the [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (especially [Julien Chaumond](https://twitter.com/julien_c)).
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/electra-small-finetuned-squadv1 | mrm8488 | 2020-12-11T21:53:59Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"question-answering",
"en",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
---
# Electra small ⚡ + SQuAD v1 ❓
[Electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
## Details of the downstream task (Q&A) - Dataset 📚
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path 'google/electra-small-discriminator' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v1.1.json' \
--predict_file '/content/dataset/dev-v1.1.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/output' \
--overwrite_output_dir \
--save_steps 1000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **77.70** |
| **F1** | **85.74** |
| **Size**| **50 MB** |
Very good metrics for such a "small" model!
```json
{
  "exact": 77.70104068117313,
  "f1": 85.73991234187997,
  "total": 10570,
  "HasAns_exact": 77.70104068117313,
  "HasAns_f1": 85.73991234187997,
  "HasAns_total": 10570,
  "best_exact": 77.70104068117313,
  "best_exact_thresh": 0.0,
  "best_f1": 85.73991234187997,
  "best_f1_thresh": 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-small-finetuned-squadv1')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'What has been discovered by scientists from China ?'
})
# Output:
# {'answer': 'A new strain of flu', 'end': 19, 'score': 0.7950334108113424, 'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/electra-base-finetuned-squadv1 | mrm8488 | 2020-12-11T21:53:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"question-answering",
"en",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: en
---
# Electra base ⚡ + SQuAD v1 ❓
[Electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
## Details of the downstream task (Q&A) - Dataset 📚
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path 'google/electra-base-discriminator' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v1.1.json' \
--predict_file '/content/dataset/dev-v1.1.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/output' \
--overwrite_output_dir \
--save_steps 1000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **83.03** |
| **F1** | **90.77** |
| **Size**| **+ 400 MB** |
Very good metrics for such a "small" model!
```json
{
  "exact": 83.03689687795648,
  "f1": 90.77486052446231,
  "total": 10570,
  "HasAns_exact": 83.03689687795648,
  "HasAns_f1": 90.77486052446231,
  "HasAns_total": 10570,
  "best_exact": 83.03689687795648,
  "best_exact_thresh": 0.0,
  "best_f1": 90.77486052446231,
  "best_f1_thresh": 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-base-finetuned-squadv1')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'What has been discovered by scientists from China ?'
})
# Output:
# {'answer': 'A new strain of flu', 'end': 19, 'score': 0.9995211430099182, 'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
moumeneb1/flaubert-base-cased-ecology_crisis | moumeneb1 | 2020-12-11T21:51:41Z | 5 | 0 | transformers | [
"transformers",
"flaubert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | # Flaubert-base-cased-ecology_crisis
An adapted [__Flaubert/Flaubert_base-cased model__](https://github.com/getalp/Flaubert), further trained on a language modeling task over the unlabeled French tweets used to create the [CrisisDataset](https://github.com/DiegoKoz/french_ecological_crisis). This intermediate masked language modeling task helped us improve the results reported in our [paper](http://www.sciencedirect.com/science/article/pii/S0306457320300650) compared to the standard flaubert-base-cased model.
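The card ships no usage snippet; a minimal feature-extraction sketch, assuming the checkpoint loads with the generic Auto classes (the example sentence is ours):
```python
import torch
from transformers import AutoModel, AutoTokenizer

# assumption: the checkpoint loads with the generic Auto classes
tokenizer = AutoTokenizer.from_pretrained("moumeneb1/flaubert-base-cased-ecology_crisis")
model = AutoModel.from_pretrained("moumeneb1/flaubert-base-cased-ecology_crisis")

# encode a tweet-like sentence and extract contextual embeddings
inputs = tokenizer("Inondations dans le sud de la France", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs[0].shape)  # (batch size, sequence length, hidden size)
```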
If you use this pretrained model on your work, please cite us as follows 🤗
```
@article{Kozlowski-et-al2020,
title = "A three-level classification of French tweets in ecological crises",
journal = "Information Processing & Management",
volume = "57",
number = "5",
pages = "102284",
year = "2020",
issn = "0306-4573",
doi = "https://doi.org/10.1016/j.ipm.2020.102284",
url = "http://www.sciencedirect.com/science/article/pii/S0306457320300650",
author = "Diego Kozlowski and Elisa Lannelongue and Frédéric Saudemont and Farah Benamara and Alda Mari and Véronique Moriceau and Abdelmoumene Boumadane",
keywords = "Crisis response from social media, Machine learning, Natural language processing, Transfer learning",
}
```
|
m3hrdadfi/bert2bert-fa-wiki-summary | m3hrdadfi | 2020-12-11T21:50:20Z | 37 | 2 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: fa
license: apache-2.0
tags:
- summarization
---
A Bert2Bert model fine-tuned on the Wiki Summary dataset to summarize Persian articles. The model achieved an 8.47 ROUGE-2 score.
For more detail, please follow the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary) repo.
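A minimal usage sketch, assuming the checkpoint works with the standard summarization pipeline:
```python
from transformers import pipeline

# assumption: the checkpoint works with the standard summarization pipeline
summarizer = pipeline("summarization", model="m3hrdadfi/bert2bert-fa-wiki-summary")

article = "..."  # replace with a Persian article to summarize
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```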
## Eval results
The following table summarizes the ROUGE scores obtained by the Bert2Bert model.
| % | Precision | Recall | FMeasure |
|:-------:|:---------:|:------:|:--------:|
| ROUGE-1 | 28.14 | 30.86 | 27.34 |
| ROUGE-2 | 07.12 | 08.47* | 07.10 |
| ROUGE-L | 28.49 | 25.87 | 25.50 |
## Questions?
Post a Github issue on the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary/issues) repo.
|
m3hrdadfi/bert2bert-fa-news-headline | m3hrdadfi | 2020-12-11T21:50:16Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: fa
license: apache-2.0
tags:
- summarization
---
A Bert2Bert model trained on the VoA Persian Corpus (a medium-sized corpus of 7.9 million words, 2003-2008) to generate headlines. The model achieved a 25.30 ROUGE-2 score.
For more detail, please follow the [News Headline Generation](https://github.com/m3hrdadfi/news-headline-generation) repo.
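A minimal generation sketch, assuming the checkpoint loads as a standard `EncoderDecoderModel` with its bundled tokenizer:
```python
from transformers import AutoTokenizer, EncoderDecoderModel

# assumption: the checkpoint loads as a standard encoder-decoder model with its bundled tokenizer
model_name = "m3hrdadfi/bert2bert-fa-news-headline"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

article = "..."  # replace with a Persian news article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
headline_ids = model.generate(inputs["input_ids"], max_length=32, num_beams=4)
print(tokenizer.decode(headline_ids[0], skip_special_tokens=True))
```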
## Eval results
The following table summarizes the ROUGE scores obtained by the Bert2Bert model.
| % | Precision | Recall | FMeasure |
|:-------:|:---------:|:------:|:--------:|
| ROUGE-1 | 43.78 | 45.52 | 43.54 |
| ROUGE-2 | 24.50 | 25.30* | 24.24 |
| ROUGE-L | 41.20 | 42.22 | 40.76 |
## Questions?
Post a Github issue on the [News Headline Generation](https://github.com/hooshvare/news-headline-generation/issues) repo.
|
loodos/electra-small-turkish-uncased-discriminator | loodos | 2020-12-11T21:49:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As the R&D team at Loodos, we release cased and uncased versions of the most recent language models for Turkish. More details about the pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish ELECTRA-Small-discriminator (uncased)
This is the discriminator of an ELECTRA-Small model with 12 encoder layers and a hidden size of 256, trained on an uncased Turkish dataset.
## Usage
Using AutoModelWithLMHead and AutoTokenizer from Transformers, you can load the model as shown below.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("loodos/electra-small-turkish-uncased-discriminator", do_lower_case=False)
model = AutoModelWithLMHead.from_pretrained("loodos/electra-small-turkish-uncased-discriminator")

# TextNormalization comes from our repo (see "Notes on Tokenizers" below); `text` is your raw input string
normalizer = TextNormalization()
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)
tokenizer.tokenize(normalized_text)
```
### Notes on Tokenizers
Currently, Huggingface's Python tokenizers have a bug concerning the letters "ı, i, I, İ" and other non-ASCII Turkish-specific letters. There are two reasons.
1- The vocabulary and SentencePiece model are created with NFC/NFKC normalization, but the tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text containing the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü, which causes wrong tokenization, wrong training and loss of information: some tokens (like "şanlıurfa", "öğün", "çocuk") are never trained. NFD/NFKD normalization is not appropriate for Turkish.
2- Python's default ```string.lower()``` and ```string.upper()``` convert
- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'
respectively. However, in Turkish, 'I' and 'İ' are two different letters.
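For illustration, Python's built-in (locale-independent) case mapping behaves like this:
```python
# CPython's default case conversion is not Turkish-aware:
print("I".lower())  # 'i'  -- Turkish expects the dotless 'ı'
print("i".upper())  # 'I'  -- Turkish expects the dotted 'İ'
print("İ".lower())  # 'i' followed by a combining dot above (U+0307), not a single letter
```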
We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).
## Details and Contact
You can contact us to ask a question, open an issue or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to the TFRC team for providing us with Cloud TPUs on the TensorFlow Research Cloud to train our models.
|
loodos/electra-base-turkish-uncased-discriminator | loodos | 2020-12-11T21:49:30Z | 58 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As the R&D team at Loodos, we release cased and uncased versions of the most recent language models for Turkish. More details about the pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish ELECTRA-Base-discriminator (uncased)
This is the discriminator of an ELECTRA-Base model with the same architecture as BERT-Base, trained on an uncased Turkish dataset.
## Usage
Using AutoModelWithLMHead and AutoTokenizer from Transformers, you can load the model as shown below.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("loodos/electra-base-turkish-uncased-discriminator", do_lower_case=False)
model = AutoModelWithLMHead.from_pretrained("loodos/electra-base-turkish-uncased-discriminator")

# TextNormalization comes from our repo (see "Notes on Tokenizers" below); `text` is your raw input string
normalizer = TextNormalization()
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)
tokenizer.tokenize(normalized_text)
```
### Notes on Tokenizers
Currently, Huggingface's Python tokenizers have a bug concerning the letters "ı, i, I, İ" and other non-ASCII Turkish-specific letters. There are two reasons.
1- The vocabulary and SentencePiece model are created with NFC/NFKC normalization, but the tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text containing the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü, which causes wrong tokenization, wrong training and loss of information: some tokens (like "şanlıurfa", "öğün", "çocuk") are never trained. NFD/NFKD normalization is not appropriate for Turkish.
2- Python's default ```string.lower()``` and ```string.upper()``` convert
- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'
respectively. However, in Turkish, 'I' and 'İ' are two different letters.
We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).
## Details and Contact
You can contact us to ask a question, open an issue or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to the TFRC team for providing us with Cloud TPUs on the TensorFlow Research Cloud to train our models.
|
ktrapeznikov/albert-xlarge-v2-squad-v2 | ktrapeznikov | 2020-12-11T21:48:41Z | 1,736 | 2 | transformers | [
"transformers",
"pytorch",
"albert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ### Model
**[`albert-xlarge-v2`](https://huggingface.co/albert-xlarge-v2)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti (11 GB) GPUs
```bash
BASE_MODEL=albert-xlarge-v2
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 3 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 814 \
--gradient_accumulation_steps 4 \
--fp16 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 84.41842836688285 |
| f1 | 87.4628460501696 |
| total | 11873.0 |
| HasAns_exact | 80.68488529014844 |
| HasAns_f1 | 86.78245127423482 |
| HasAns_total | 5928.0 |
| NoAns_exact | 88.1412952060555 |
| NoAns_f1 | 88.1412952060555 |
| NoAns_total | 5945.0 |
| best_exact | 84.41842836688285 |
| best_exact_thresh | 0.0 |
| best_f1 | 87.46284605016956 |
| best_f1_thresh | 0.0 |
### Usage
See the [huggingface documentation](https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering). Training on `SQuAD V2` allows the model to score whether a paragraph contains an answer:
```python
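# model, tokenizer, and input_ids are prepared as in the linked ALBERT docs (e.g. AlbertForQuestionAnswering / AlbertTokenizer)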
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
|
krevas/finance-koelectra-small-discriminator | krevas | 2020-12-11T21:48:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"ko",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: ko
---
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-discriminator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25 GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX GPU for 500k steps.
## Usage
```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-small-discriminator")
sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
krevas/finance-koelectra-base-generator | krevas | 2020-12-11T21:48:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ko
---
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25 GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX GPU for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-base-generator",
tokenizer="krevas/finance-koelectra-base-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
kiri-ai/distiluse-base-multilingual-cased-et | kiri-ai | 2020-12-11T21:48:24Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"et",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: et
---
## Model Description
This model is based on **Sentence-Transformers'** `distiluse-base-multilingual-cased` multilingual model, which has been extended to produce sentence embeddings for Estonian.
## Sentence-Transformers
This model can be imported directly via the SentenceTransformers package as shown below:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('kiri-ai/distiluse-base-multilingual-cased-et')
sentences = ['Here is a sample sentence','Another sample sentence']
embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(embeddings)
```
## Fine-tuning
The fine-tuning and training processes were inspired by [sbert's](https://www.sbert.net/) multilingual training techniques which are available [here](https://www.sbert.net/examples/training/multilingual/README.html). The documentation shows and explains the step-by-step process of using parallel sentences to train models in a different language.
### Resources
The model was fine-tuned on English-Estonian parallel sentences taken from [OPUS](http://opus.nlpl.eu/) and [ParaCrawl](https://paracrawl.eu/).
|
jplu/tf-xlm-roberta-base | jplu | 2020-12-11T21:48:00Z | 4,839 | 1 | transformers | [
"transformers",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | # Tensorflow XLM-RoBERTa
In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.
## XLM-RoBERTa
[XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a large-scale cross-lingual sentence encoder. It is trained on 2.5 TB of Common Crawl data filtered to cover 100 languages. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
## Model Weights
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `jplu/tf-xlm-roberta-base` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5)
| `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5)
## Usage
With Transformers >= 2.4, the TensorFlow models of XLM-RoBERTa can be loaded as follows:
```python
from transformers import TFXLMRobertaModel
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")
```
Or
```python
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```
## Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/jplu).
## Acknowledgments
Thanks to all the Huggingface team for the support and their amazing library!
|
indobenchmark/indobert-lite-large-p1 | indobenchmark | 2020-12-11T21:45:56Z | 40 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Large Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p1")
```
### Extract contextual representation
```python
import torch

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
indobenchmark/indobert-lite-base-p1 | indobenchmark | 2020-12-11T21:45:50Z | 261 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"albert",
"feature-extraction",
"indobert",
"indobenchmark",
"indonlu",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"license:mit",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Base Model (phase1 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p1")
```
### Extract contextual representation
```python
import torch

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
illuin/camembert-base-fquad | illuin | 2020-12-11T21:45:27Z | 506 | 7 | transformers | [
"transformers",
"pytorch",
"camembert",
"question-answering",
"fr",
"dataset:fquad",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: fr
tags:
- question-answering
- camembert
license: gpl-3.0
datasets:
- fquad
---
# camembert-base-fquad
## Description
A native French Question Answering model [CamemBERT-base](https://camembert-model.fr/) fine-tuned on [FQuAD](https://fquad.illuin.tech/).
## Evaluation results
On the development set.
```shell
{"f1": 88.1, "exact_match": 78.1}
```
On the test set.
```shell
{"f1": 88.3, "exact_match": 78.0}
```
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='illuin/camembert-base-fquad', tokenizer='illuin/camembert-base-fquad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
## Citation
If you use our work, please cite:
```bibtex
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
|
healx/gpt-2-pubmed-large | healx | 2020-12-11T21:43:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2004.13845",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | GPT-2 (774M model) finetuned on 0.5m PubMed abstracts. Used in the [writemeanabstract.com](writemeanabstract.com) and the following preprint:
[Papanikolaou, Yannis, and Andrea Pierleoni. "DARE: Data Augmented Relation Extraction with GPT-2." arXiv preprint arXiv:2004.13845 (2020).](https://arxiv.org/abs/2004.13845)
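The card ships no usage snippet; a minimal generation sketch, assuming the checkpoint loads as a standard GPT-2 causal language model:
```python
from transformers import pipeline

# assumption: the checkpoint loads as a standard GPT-2 causal language model
generator = pipeline("text-generation", model="healx/gpt-2-pubmed-large")
print(generator("We present a study of", max_length=60, num_return_sequences=1)[0]["generated_text"])
```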
|
elgeish/cs224n-squad2.0-distilbert-base-uncased | elgeish | 2020-12-11T21:39:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
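A minimal usage sketch, assuming the checkpoint works with the standard question-answering pipeline (the example question and context are ours):
```python
from transformers import pipeline

# assumption: the checkpoint works with the standard question-answering pipeline
qa = pipeline("question-answering", model="elgeish/cs224n-squad2.0-distilbert-base-uncased")
print(qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the official SQuAD2.0 training set.",
))
```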
## Results
```json
{
"exact": 65.16946363935504,
"f1": 67.87348075352251,
"total": 6078,
"HasAns_exact": 69.51890034364261,
"HasAns_f1": 75.16667217179045,
"HasAns_total": 2910,
"NoAns_exact": 61.17424242424242,
"NoAns_f1": 61.17424242424242,
"NoAns_total": 3168,
"best_exact": 65.16946363935504,
"best_exact_thresh": 0.0,
"best_f1": 67.87348075352243,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "distilbert-base-uncased-distilled-squad",
"model_type": "distilbert",
"num_train_epochs": 4,
"per_gpu_train_batch_size": 32,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 32,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
elgeish/cs224n-squad2.0-albert-large-v2 | elgeish | 2020-12-11T21:38:57Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"question-answering",
"exbert",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- exbert
---
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-large-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
## Results
```json
{
"exact": 79.2694965449161,
"f1": 82.50844352970152,
"total": 6078,
"HasAns_exact": 74.87972508591065,
"HasAns_f1": 81.64478342732858,
"HasAns_total": 2910,
"NoAns_exact": 83.30176767676768,
"NoAns_f1": 83.30176767676768,
"NoAns_total": 3168,
"best_exact": 79.2694965449161,
"best_exact_thresh": 0.0,
"best_f1": 82.50844352970155,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 1,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "albert-large-v2",
"model_type": "albert",
"num_train_epochs": 5,
"per_gpu_train_batch_size": 8,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 8,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
elgeish/cs224n-squad2.0-albert-base-v2 | elgeish | 2020-12-11T21:38:54Z | 1,062 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"question-answering",
"exbert",
"arxiv:2004.07067",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- exbert
---
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-base-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
## Results
```json
{
"exact": 78.94044093451794,
"f1": 81.7724930324639,
"total": 6078,
"HasAns_exact": 76.28865979381443,
"HasAns_f1": 82.20385314478195,
"HasAns_total": 2910,
"NoAns_exact": 81.37626262626263,
"NoAns_f1": 81.37626262626263,
"NoAns_total": 3168,
"best_exact": 78.95689371503784,
"best_exact_thresh": 0.0,
"best_f1": 81.78894581298378,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "albert-base-v2",
"model_type": "albert",
"num_train_epochs": 3,
"per_gpu_train_batch_size": 8,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 8,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
txus/calbert-base-uncased | txus | 2020-12-11T21:36:11Z | 11 | 1 | transformers | [
"transformers",
"pytorch",
"albert",
"masked-lm",
"catalan",
"exbert",
"ca",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: "ca"
tags:
- masked-lm
- catalan
- exbert
license: mit
---
# Calbert: a Catalan Language Model
## Introduction
CALBERT is an open-source language model for Catalan based on the ALBERT architecture.
It is available on Hugging Face in two versions, `tiny-uncased` and `base-uncased` (the one you are looking at), and was pretrained on the [OSCAR dataset](https://traces1.inria.fr/oscar/).
For further information or requests, please go to the [GitHub repository](https://github.com/codegram/calbert)
## Pre-trained models
| Model | Arch. | Training data |
| ----------------------------------- | -------------- | ---------------------- |
| `codegram` / `calbert-tiny-uncased` | Tiny (uncased) | OSCAR (4.3 GB of text) |
| `codegram` / `calbert-base-uncased` | Base (uncased) | OSCAR (4.3 GB of text) |
## How to use Calbert with HuggingFace
#### Load Calbert and its tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codegram/calbert-base-uncased")
model = AutoModel.from_pretrained("codegram/calbert-base-uncased")
model.eval()  # disable dropout (or leave in train mode to finetune)
```
#### Filling masks using pipeline
```python
from transformers import pipeline
calbert_fill_mask = pipeline("fill-mask", model="codegram/calbert-base-uncased", tokenizer="codegram/calbert-base-uncased")
results = calbert_fill_mask("M'agrada [MASK] això")
# results
# [{'sequence': "[CLS] m'agrada molt aixo[SEP]", 'score': 0.614592969417572, 'token': 61},
# {'sequence': "[CLS] m'agrada moltíssim aixo[SEP]", 'score': 0.06058056280016899, 'token': 4867},
# {'sequence': "[CLS] m'agrada més aixo[SEP]", 'score': 0.017195818945765495, 'token': 43},
# {'sequence': "[CLS] m'agrada llegir aixo[SEP]", 'score': 0.016321714967489243, 'token': 684},
# {'sequence': "[CLS] m'agrada escriure aixo[SEP]", 'score': 0.012185849249362946, 'token': 1306}]
```
#### Extract contextual embedding features from Calbert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("M'és una mica igual")
# ['▁m', "'", 'es', '▁una', '▁mica', '▁igual']
# Encode tokens to ids and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [2, 109, 7, 71, 36, 371, 1103, 3]
# NB: Can be done in one step: tokenizer.encode("M'és una mica igual")
# Feed tokens to Calbert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = model(encoded_sentence)
embeddings.size()
# torch.Size([1, 8, 768])
embeddings.detach()
# tensor([[[-0.0261, 0.1166, -0.1075, ..., -0.0368, 0.0193, 0.0017],
# [ 0.1289, -0.2252, 0.9881, ..., -0.1353, 0.3534, 0.0734],
# [-0.0328, -1.2364, 0.9466, ..., 0.3455, 0.7010, -0.2085],
# ...,
# [ 0.0397, -1.0228, -0.2239, ..., 0.2932, 0.1248, 0.0813],
# [-0.0261, 0.1165, -0.1074, ..., -0.0368, 0.0193, 0.0017],
# [-0.1934, -0.2357, -0.2554, ..., 0.1831, 0.6085, 0.1421]]])
```
## Authors
CALBERT was trained and evaluated by [Txus Bach](https://twitter.com/txustice), as part of [Codegram](https://www.codegram.com)'s applied research.
<a href="https://huggingface.co/exbert/?model=codegram/calbert-base-uncased&modelKind=bidirectional&sentence=M%27agradaria%20força%20saber-ne%20més">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
clue/xlnet_chinese_large | clue | 2020-12-11T21:36:08Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"xlnet",
"zh",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: zh
---
## xlnet_chinese_large
### Overview
**Language model:** xlnet-large
**Model size:** 1.3G
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
```python
import torch
from transformers import XLNetTokenizer,XLNetModel
tokenizer = XLNetTokenizer.from_pretrained("clue/xlnet_chinese_large")
xlnet = XLNetModel.from_pretrained("clue/xlnet_chinese_large")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
clue/albert_chinese_tiny | clue | 2020-12-11T21:35:55Z | 120 | 17 | transformers | [
"transformers",
"pytorch",
"albert",
"zh",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: zh
---
## albert_chinese_tiny
### Overview
**Language model:** albert-tiny
**Model size:** 16M
**Language:** Chinese
**Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020)
**Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE)
### Results
For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE).
### Usage
**NOTE:** Since SentencePiece is not used in the `albert_chinese_tiny` model, you have to use **BertTokenizer** instead of AlbertTokenizer!
```python
import torch
from transformers import BertTokenizer, AlbertModel
tokenizer = BertTokenizer.from_pretrained("clue/albert_chinese_tiny")
albert = AlbertModel.from_pretrained("clue/albert_chinese_tiny")
```
### About CLUE benchmark
Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
|
almanach/camembert-base-ccnet | almanach | 2020-12-11T21:35:15Z | 63 | 1 | transformers | [
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-ccnet")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-ccnet", tokenizer="camembert/camembert-base-ccnet")
results = camembert_fill_mask("Le camembert est <mask> :)")
# results
#[{'sequence': '<s> Le camembert est bon :)</s>', 'score': 0.14011502265930176, 'token': 305},
# {'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.13929404318332672, 'token': 11661},
# {'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.07010319083929062, 'token': 3497},
# {'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.025885622948408127, 'token': 2528},
# {'sequence': '<s> Le camembert est top :)</s>', 'score': 0.025684962049126625, 'token': 2328}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁cam', 'ember', 't', '▁!']
# Encode tokens to ids and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[ 0.0667, -0.2467, 0.0954, ..., 0.2144, 0.0279, 0.3621],
# [-0.0472, 0.4092, -0.6602, ..., 0.2095, 0.1391, -0.0401],
# [ 0.1911, -0.2347, -0.0811, ..., 0.4306, -0.0639, 0.1821],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-ccnet", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[ 0.0057, -0.1022, 0.0163, ..., -0.0675, -0.0360, 0.1078],
# [-0.1096, -0.3344, -0.0593, ..., 0.1625, -0.0432, -0.1646],
# [ 0.3751, -0.3829, 0.0844, ..., 0.1067, -0.0330, 0.3334],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
almanach/camembert-base-ccnet-4gb | almanach | 2020-12-11T21:35:11Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-ccnet-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-ccnet-4gb", tokenizer="camembert/camembert-base-ccnet-4gb")
results = camembert_fill_mask("Le camembert est-il <mask> ?")
# results
#[{'sequence': '<s> Le camembert est-il sain?</s>', 'score': 0.07001790404319763, 'token': 10286},
#{'sequence': '<s> Le camembert est-il français?</s>', 'score': 0.057594332844018936, 'token': 384},
#{'sequence': '<s> Le camembert est-il bon?</s>', 'score': 0.04098724573850632, 'token': 305},
#{'sequence': '<s> Le camembert est-il périmé?</s>', 'score': 0.03486393392086029, 'token': 30862},
#{'sequence': '<s> Le camembert est-il cher?</s>', 'score': 0.021535946056246758, 'token': 1604}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: This can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[ 0.0331, 0.0095, -0.2776, ..., 0.2875, -0.0827, -0.2467],
# [-0.1348, 0.0478, -0.5409, ..., 0.8330, 0.0467, 0.0662],
# [ 0.0920, -0.0264, 0.0177, ..., 0.1112, 0.0108, -0.1123],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-ccnet-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings is a list of length 13 (input embedding layer + 12 self-attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0144, 0.1855, 0.4895, ..., -0.1537, 0.0107, -0.2293],
# [-0.6664, -0.0880, -0.1539, ..., 0.3635, 0.4047, 0.1258],
# [ 0.0511, 0.0540, 0.2545, ..., 0.0709, -0.0288, -0.0779],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
aliosm/ai-soco-cpp-roberta-tiny-clas | aliosm | 2020-12-11T21:32:44Z | 0 | 0 | null | [
"exbert",
"authorship-identification",
"fire2020",
"pan2020",
"ai-soco",
"classification",
"dataset:ai-soco",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: "c++"
tags:
- exbert
- authorship-identification
- fire2020
- pan2020
- ai-soco
- classification
license: "mit"
datasets:
- ai-soco
metrics:
- accuracy
---
# ai-soco-c++-roberta-tiny-clas
## Model description
The `ai-soco-c++-roberta-tiny` model fine-tuned on the [AI-SOCO](https://sites.google.com/view/ai-soco-2020) task.
#### How to use
You can use the model directly after tokenizing the text using the provided tokenizer with the model files.
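As a minimal sketch of such use (not part of the original card, and assuming the checkpoint ships a standard sequence-classification head usable through the generic `text-classification` pipeline):
```python
from transformers import pipeline
# Hypothetical example; the returned labels presumably correspond to AI-SOCO author ids.
classifier = pipeline(
    "text-classification",
    model="aliosm/ai-soco-cpp-roberta-tiny-clas",
    tokenizer="aliosm/ai-soco-cpp-roberta-tiny-clas",
)
code = "#include <iostream>\nint main() {\n    int n;\n    std::cin >> n;\n    std::cout << n * n;\n    return 0;\n}"
# The card converts every run of 4 spaces to a tab before tokenization, so do the same here.
print(classifier(code.replace("    ", "\t")))
```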
#### Limitations and bias
The model is limited to C++ programming language only.
## Training data
The model was initialized from the [`ai-soco-c++-roberta-tiny`](https://github.com/huggingface/transformers/blob/master/model_cards/aliosm/ai-soco-c++-roberta-tiny) model and trained on the [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset to perform text classification.
## Training procedure
The model was trained on the Google Colab platform using a V100 GPU for 10 epochs with a batch size of 32 and a maximum sequence length of 512 (longer sequences were truncated). Each run of 4 consecutive spaces was converted to a single tab character (`\t`) before tokenization.
## Eval results
The model achieved 87.66%/87.46% accuracy on the AI-SOCO task and ranked 9th.
### BibTeX entry and citation info
```bibtex
@inproceedings{ai-soco-2020-fire,
title = "Overview of the {PAN@FIRE} 2020 Task on {Authorship Identification of SOurce COde (AI-SOCO)}",
author = "Fadel, Ali and Musleh, Husam and Tuffaha, Ibraheem and Al-Ayyoub, Mahmoud and Jararweh, Yaser and Benkhelifa, Elhadj and Rosso, Paolo",
booktitle = "Proceedings of The 12th meeting of the Forum for Information Retrieval Evaluation (FIRE 2020)",
year = "2020"
}
```
<a href="https://huggingface.co/exbert/?model=aliosm/ai-soco-c++-roberta-tiny-clas">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
aliosm/ai-soco-cpp-roberta-tiny-96 | aliosm | 2020-12-11T21:32:42Z | 0 | 0 | null | [
"exbert",
"authorship-identification",
"fire2020",
"pan2020",
"ai-soco",
"dataset:ai-soco",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: "c++"
tags:
- exbert
- authorship-identification
- fire2020
- pan2020
- ai-soco
license: "mit"
datasets:
- ai-soco
metrics:
- perplexity
---
# ai-soco-c++-roberta-tiny-96
## Model description
A RoBERTa model pre-trained from scratch, with 1 layer and 96 attention heads, on the [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset, which consists of C++ source codes crawled from the CodeForces website.
## Intended uses & limitations
The model can be used for code classification, authorship identification and other downstream tasks on the C++ programming language.
#### How to use
You can use the model directly after tokenizing the text using the provided tokenizer with the model files.
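As a minimal sketch (not part of the original card, and assuming the usual RoBERTa-style `<mask>` token), the pretrained checkpoint can be queried through the fill-mask pipeline:
```python
from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model="aliosm/ai-soco-cpp-roberta-tiny-96",
    tokenizer="aliosm/ai-soco-cpp-roberta-tiny-96",
)
# The card converts every run of 4 spaces to a tab before tokenization, so use tabs here too.
print(fill_mask("int main() {\n\tint n;\n\tstd::cin >> n;\n\tstd::cout << n;\n\treturn <mask>;\n}"))
```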
#### Limitations and bias
The model is limited to C++ programming language only.
## Training data
The model was initialized randomly and trained on the [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset, which contains 100K C++ source codes.
## Training procedure
The model was trained on the Google Colab platform with 8 TPU cores for 200 epochs, a 16\*8 batch size, a 512 maximum sequence length and the MLM objective. Other parameters were left at the defaults of the [`run_language_modelling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) script. Each run of 4 consecutive spaces was converted to a single tab character (`\t`) before tokenization.
### BibTeX entry and citation info
```bibtex
@inproceedings{ai-soco-2020-fire,
title = "Overview of the {PAN@FIRE} 2020 Task on {Authorship Identification of SOurce COde (AI-SOCO)}",
author = "Fadel, Ali and Musleh, Husam and Tuffaha, Ibraheem and Al-Ayyoub, Mahmoud and Jararweh, Yaser and Benkhelifa, Elhadj and Rosso, Paolo",
booktitle = "Proceedings of The 12th meeting of the Forum for Information Retrieval Evaluation (FIRE 2020)",
year = "2020"
}
```
<a href="https://huggingface.co/exbert/?model=aliosm/ai-soco-c++-roberta-tiny-96">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
akhooli/mbart-large-cc25-en-ar | akhooli | 2020-12-11T21:32:08Z | 32 | 3 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"en",
"ar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
tags:
- translation
language:
- en
- ar
license: mit
---
### mbart-large-en-ar
This is mbart-large-cc25, finetuned on a subset of the UN corpus for en_ar.
Usage: see [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the model was trained on a limited dataset and is not fully trained (do not use it in production).
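A minimal translation sketch for experimentation (not from the linked notebook; it assumes the checkpoint keeps mbart-large-cc25's language codes, `en_XX` for the source and `ar_AR` for the target):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer
model = MBartForConditionalGeneration.from_pretrained("akhooli/mbart-large-cc25-en-ar")
tokenizer = MBartTokenizer.from_pretrained("akhooli/mbart-large-cc25-en-ar", src_lang="en_XX")
batch = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")
generated = model.generate(
    **batch,
    decoder_start_token_id=tokenizer.lang_code_to_id["ar_AR"],  # assumed target language code
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```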
|
ahotrod/electra_large_discriminator_squad2_512 | ahotrod | 2020-12-11T21:31:42Z | 22,523 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ## ELECTRA_large_discriminator language model fine-tuned on SQuAD2.0
### with the following results:
```
"exact": 87.09677419354838,
"f1": 89.98343832723452,
"total": 11873,
"HasAns_exact": 84.66599190283401,
"HasAns_f1": 90.44759839056285,
"HasAns_total": 5928,
"NoAns_exact": 89.52060555088309,
"NoAns_f1": 89.52060555088309,
"NoAns_total": 5945,
"best_exact": 87.09677419354838,
"best_exact_thresh": 0.0,
"best_f1": 89.98343832723432,
"best_f1_thresh": 0.0
```
### from script:
```
python ${EXAMPLES}/run_squad.py \
--model_type electra \
--model_name_or_path google/electra-large-discriminator \
--do_train \
--do_eval \
--train_file ${SQUAD}/train-v2.0.json \
--predict_file ${SQUAD}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--warmup_steps 306 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--per_gpu_train_batch_size 8 \
--gradient_accumulation_steps 16 \
--per_gpu_eval_batch_size 128 \
--fp16 \
--fp16_opt_level O1 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--output_dir ${MODEL_PATH}
```
### using the following system & software:
```
Transformers: 2.11.0
PyTorch: 1.5.0
TensorFlow: 2.2.0
Python: 3.8.1
OS/Platform: Linux-5.3.0-59-generic-x86_64-with-glibc2.10
CPU/GPU: Intel i9-9900K / NVIDIA Titan RTX 24GB
```
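For inference, a minimal sketch (not part of the original card) using the `question-answering` pipeline with SQuAD 2.0-style unanswerable handling:
```python
from transformers import pipeline
qa = pipeline(
    "question-answering",
    model="ahotrod/electra_large_discriminator_squad2_512",
    tokenizer="ahotrod/electra_large_discriminator_squad2_512",
)
# Illustrative question/context pair.
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The ELECTRA-large discriminator was fine-tuned on SQuAD 2.0 for extractive question answering.",
    handle_impossible_answer=True,
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```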
|
cinmodel/electra-small-japanese-discriminator | cinmodel | 2020-12-11T21:26:13Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
language: ja
license: apache-2.0
---
## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.
## How to use the discriminator in `transformers`
```
from transformers import BertJapaneseTokenizer, ElectraForPreTraining
tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-discriminator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForPreTraining.from_pretrained('Cinnamon/electra-small-japanese-discriminator')
```
|
nielsr/tapas-base | nielsr | 2020-12-11T11:12:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tapas",
"feature-extraction",
"sequence-classification",
"en",
"arxiv:2004.02349",
"arxiv:2010.00571",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
---
# TAPAS base model
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="v1"`, which corresponds to `tapas_inter_masklm_base`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then
jointly training these randomly initialized classification heads with the base model on a downstream task.
## Intended uses & limitations
You can use the raw model to extract hidden representations of table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.
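As a minimal sketch of that raw-model use (not part of the original card; the table and question below are illustrative, and older `transformers` versions additionally require `torch-scatter` for TAPAS):
```python
import pandas as pd
from transformers import TapasModel, TapasTokenizer
tokenizer = TapasTokenizer.from_pretrained("nielsr/tapas-base")
model = TapasModel.from_pretrained("nielsr/tapas-base")
# TAPAS expects all cell values as strings.
table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo DiCaprio"], "Age": ["56", "45"]})
queries = ["How old is Brad Pitt?"]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)
```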
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Pre-training
The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.
The optimizer used is Adam with a learning rate of 5e-5, and a warmup
ratio of 0.01.
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
dbmdz/flair-historic-ner-lft | dbmdz | 2020-12-11T10:41:44Z | 17 | 1 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"license:mit",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
inference: false
license: mit
---
# Towards Robust Named Entity Recognition for Historic German
Based on [our paper](https://www.aclweb.org/anthology/W19-4312/)
we release a new model trained on the LFT dataset.
**Note:** We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Avg.
| ------------- | ----- | ----- | --------- | ------------
| Development | 76.32 | 76.13 | **76.36** | 76.27
| Test | 77.07 | 77.35 | 77.20 | 77.21
The paper reported an averaged F1-score of 77.51.
† denotes that this model is selected for upload.
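A minimal usage sketch (not part of the original card; the example sentence is illustrative):
```python
from flair.data import Sentence
from flair.models import SequenceTagger
tagger = SequenceTagger.load("dbmdz/flair-historic-ner-lft")
sentence = Sentence("Theodor Fontane wurde in Neuruppin geboren.")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```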
|
bewgle/bart-large-mnli-bewgle | bewgle | 2020-12-09T18:30:05Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
widget :
- text: "I like you. </s></s> I love you."
---
## bart-large-mnli
Trained by Facebook, [original source](https://github.com/pytorch/fairseq/tree/master/examples/bart)
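A minimal sketch (not part of the original card) that mirrors the widget input above; the label names come from the checkpoint's own config, so check `config.id2label` before relying on them:
```python
from transformers import pipeline
nli = pipeline(
    "text-classification",
    model="bewgle/bart-large-mnli-bewgle",
    tokenizer="bewgle/bart-large-mnli-bewgle",
)
# Premise and hypothesis joined the same way as in the widget example.
print(nli("I like you. </s></s> I love you."))
```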
|
google/t5-11b-ssm-wqo | google | 2020-12-07T08:47:33Z | 0 | 1 | null | [
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:web_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- c4
- wikipedia
- web_questions
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Web Questions (WQ)](https://huggingface.co/datasets/web_questions).
**Note**: The model was fine-tuned on 90% of the train splits of [Web Questions (WQ)](https://huggingface.co/datasets/web_questions) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Web Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-wqo**|**40.8**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-wqo|42.8|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wqo")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
gael1130/gael_first_model | gael1130 | 2020-12-05T12:54:42Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | I am adding my first README in order to test the interface. How good is it really? |
Parth/mT5-question-generator | Parth | 2020-12-01T03:38:27Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | from transformers import MT5ForConditionalGeneration, AutoTokenizer
model = MT5ForConditionalGeneration.from_pretrained("Parth/mT5-question-generator")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
|
julien-c/flair-de-ner | julien-c | 2020-11-26T21:59:38Z | 12 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"dataset:conll2003",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
datasets:
- conll2003
inference: false
---
## Flair NER model `de-ner-conll03-v0.4.pt`
Imported from https://nlp.informatik.hu-berlin.de/resources/models/de-ner/
### Demo: How to use in Flair
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence(
"Mein Name ist Julien, ich lebe zurzeit in Paris, ich arbeite bei Hugging Face, Inc."
)
tagger = SequenceTagger.load("julien-c/flair-de-ner")
# predict NER tags
tagger.predict(sentence)
# print sentence with predicted tags
print(sentence.to_tagged_string())
```
yields the following output:
> `Mein Name ist Julien <S-PER> , ich lebe zurzeit in Paris <S-LOC> , ich arbeite bei Hugging <B-ORG> Face <E-ORG> , Inc <S-ORG> .`
### Thanks [@stefan-it](https://huggingface.co/stefan-it) for the Flair integration ❤️ 🔥
|
sshleifer/distill-pegasus-xsum-16-8 | sshleifer | 2020-10-08T03:05:56Z | 50 | 1 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
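A minimal summarization sketch (not part of the original card; the input text is illustrative):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = "sshleifer/distill-pegasus-xsum-16-8"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
text = "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions."
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```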
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using a 20% uniform noise on the importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanings were also changed, please refer to the change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
### Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
deep-learning-analytics/triviaqa-t5-base | deep-learning-analytics | 2020-09-30T18:50:48Z | 55 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"triviaqa",
"t5-base",
"lm-head",
"question-answering",
"closed-book",
"pipeline:question-answering",
"eng",
"dataset:triviaqa",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language: "eng"
tags:
- triviaqa
- t5-base
- pytorch
- lm-head
- question-answering
- closed-book
- t5
- pipeline:question-answering
datasets:
- triviaqa
widget:
- text: ["Mount Everest is found in which mountain range?","None"]
metrics:
- EM: 17
- Subset match: 24.5
---
# Model name
Closed Book Trivia-QA T5 base
## Model description
This is a T5-base model trained on the No Context Trivia QA dataset. The input to the model is a trivia-type question, and the model is tuned to recall the answer from its own parameters (closed-book). The pretrained model used here was trained on the Common Crawl (C4) dataset. The model was trained for 135 epochs using a batch size of 32 and a learning rate of 1e-3. max_input_length is set to 25 and max_output_length to 10. The model attained an EM score of 17 and a Subset Match score of 24.5
We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/build-a-trivia-bot-using-t5-transformer-345ff83205b6).
Test the model on Trivia Questions from the websites below:
https://www.triviaquestionss.com/easy-trivia-questions/
https://laffgaff.com/easy-trivia-questions-and-answers/
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/triviaqa-t5-base")
model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/triviaqa-t5-base")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
text = "Who directed the movie Jaws?"
preprocess_text = text.strip().replace("\n","")
tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device)
outs = model.generate(
tokenized_text,
max_length=10,
num_beams=2,
early_stopping=True
)
dec = [tokenizer.decode(ids) for ids in outs]
print("Predicted Answer: ", dec)
```
|
Capreolus/electra-base-msmarco | Capreolus | 2020-09-08T14:53:10Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | # capreolus/electra-base-msmarco
## Model description
ELECTRA-Base model (`google/electra-base-discriminator`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the [TFElectraRelevanceHead](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) in the Capreolus BERT-MaxP implementation for a usage example.
This corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_electra_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
|
ishan/distilbert-base-uncased-mnli | ishan | 2020-08-21T10:23:40Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:MNLI",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# distilbert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained model from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) and fine-tuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8223 |
| Mismatched | 0.8216 |
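A minimal inference sketch (not part of the original card); the premise/hypothesis pair is illustrative, and the label mapping should be read from `config.id2label` rather than assumed:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ishan/distilbert-base-uncased-mnli")
model = AutoModelForSequenceClassification.from_pretrained("ishan/distilbert-base-uncased-mnli")
inputs = tokenizer("A soccer game with multiple males playing.", "Some men are playing a sport.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(dict(zip(model.config.id2label.values(), probs[0].tolist())))
```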
|
textattack/facebook-bart-base-RTE | textattack | 2020-08-20T15:50:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ## TextAttack Model Card
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7256317689530686, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
sampathkethineedi/industry-classification | sampathkethineedi | 2020-07-16T15:27:38Z | 1,545 | 22 | transformers | [
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"tensorflow",
"industry",
"buisiness",
"description",
"multi-class",
"classification",
"en",
"autotrain_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: "en"
thumbnail: "https://huggingface.co/sampathkethineedi"
tags:
- distilbert
- pytorch
- tensorflow
- text-classification
- industry
- buisiness
- description
- multi-class
- classification
license: "mit"
inference: false
---
# industry-classification
## Model description
DistilBERT Model to classify a business description into one of **62 industry tags**.
Trained on 7000 samples of Business Descriptions and associated labels of companies in India.
## How to use
PyTorch and TF models available
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification")
model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification")
industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.")
'''Output'''
[{'label': 'Consumer Finance', 'score': 0.9841355681419373}]
```
## Limitations and bias
Training data is only for Indian companies
|
textattack/albert-base-v2-snli | textattack | 2020-07-06T16:36:47Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the snli dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 64.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9060150375939849, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
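A minimal inference sketch (not part of the original card); TextAttack checkpoints expose integer labels, so consult `config.id2label` for the SNLI label names:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "textattack/albert-base-v2-snli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
inputs = tokenizer("A man inspects a uniform.", "The man is sleeping.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```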
|
textattack/distilbert-base-uncased-rotten-tomatoes | textattack | 2020-07-06T16:36:02Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 128, a learning
rate of 1e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8395872420262664, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-rotten-tomatoes | textattack | 2020-07-06T16:35:34Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8808630393996247, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-imdb | textattack | 2020-07-06T16:35:25Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"xlnet",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 512.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.95352, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-WNLI | textattack | 2020-07-06T16:33:44Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 128, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after 0 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-STS-B | textattack | 2020-07-06T16:33:08Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"xlnet",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 8, a learning
rate of 5e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.8892630070017784, as measured by the
eval set pearson correlation, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-STS-B | textattack | 2020-07-06T16:32:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.9064220351504577, as measured by the
eval set pearson correlation, found after 3 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-SST-2 | textattack | 2020-07-06T16:32:15Z | 178 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 64.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9254587155963303, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-RTE | textattack | 2020-07-06T16:31:05Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.776173285198556, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/distilbert-base-uncased-CoLA | textattack | 2020-07-06T16:29:03Z | 3,039 | 3 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8235858101629914, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-CoLA | textattack | 2020-07-06T16:28:50Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8245445829338447, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/albert-base-v2-rotten_tomatoes | textattack | 2020-06-25T20:00:46Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ## albert-base-v2 fine-tuned with TextAttack on the rotten_tomatoes dataset
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 128, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8855534709193246, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
sshleifer/opus-mt-CELTIC-en | sshleifer | 2020-05-14T13:13:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ### opus-mt-INSULAR_CELTIC-en
* source languages: ga,cy,br,gd,kw,gv
* target languages: en
* OPUS readme: [ga+cy+br+gd+kw+gv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ga+cy+br+gd+kw+gv-en/README.md)
* dataset: opus+techiaith+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+techiaith+bt-2020-04-30.zip](https://object.pouta.csc.fi/OPUS-MT-models/ga+cy+br+gd+kw+gv-en/opus+techiaith+bt-2020-04-30.zip)
* test set translations: [opus+techiaith+bt-2020-04-30.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ga+cy+br+gd+kw+gv-en/opus+techiaith+bt-2020-04-30.test.txt)
* test set scores: [opus+techiaith+bt-2020-04-30.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ga+cy+br+gd+kw+gv-en/opus+techiaith+bt-2020-04-30.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ga.en | 28.4 | 0.446 |
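A minimal translation sketch (not part of the original card; the Irish example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "sshleifer/opus-mt-CELTIC-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["Tá an aimsir go maith inniu."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```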
|
Alexshake78/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel | Alexshake78 | 2025-06-03T23:29:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am darting endangered eel",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:56:54Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am darting endangered eel
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alexshake78/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-darting_endangered_eel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Soughing/gla_large | Soughing | 2025-06-04T00:41:08Z | 42 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T03:28:08Z | ---
license: apache-2.0
---
|
pmk2021/vampnet_small-tria-d1026-l8-h8-mode-vampnet_rms-hchroma-36c-top3-latest | pmk2021 | 2025-06-04T00:41:06Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-04T00:17:18Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Soughing/gla_medium | Soughing | 2025-06-04T00:40:54Z | 58 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-01T21:29:20Z | ---
license: apache-2.0
---
|
YuchenLi01/generatedMoreUniqueResponseNoGTv2_Qwen2.5-1.5BInstruct_dpo_ebs32_lr1e-06_beta1.0_42 | YuchenLi01 | 2025-06-04T00:40:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T15:55:17Z | ---
library_name: transformers
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: generatedMoreUniqueResponseNoGTv2_Qwen2.5-1.5BInstruct_dpo_ebs32_lr1e-06_beta1.0_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# generatedMoreUniqueResponseNoGTv2_Qwen2.5-1.5BInstruct_dpo_ebs32_lr1e-06_beta1.0_42
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6454
- Rewards/chosen: -2.0856
- Rewards/rejected: -4.6048
- Rewards/accuracies: 0.7259
- Rewards/margins: 2.5191
- Logps/rejected: -61.4843
- Logps/chosen: -46.6232
- Logits/rejected: -2.1125
- Logits/chosen: -2.2575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.785 | 0.0060 | 20 | 0.7354 | 0.0110 | 0.0060 | 0.5053 | 0.0050 | -56.8735 | -44.5266 | -2.1689 | -2.2863 |
| 0.8103 | 0.0120 | 40 | 0.7294 | 0.0113 | -0.0037 | 0.5080 | 0.0150 | -56.8832 | -44.5263 | -2.1705 | -2.2880 |
| 0.7139 | 0.0180 | 60 | 0.7323 | -0.0066 | -0.0191 | 0.5067 | 0.0125 | -56.8987 | -44.5442 | -2.1654 | -2.2830 |
| 0.8449 | 0.0240 | 80 | 0.7273 | 0.0187 | -0.0111 | 0.5107 | 0.0298 | -56.8907 | -44.5189 | -2.1656 | -2.2832 |
| 0.758 | 0.0300 | 100 | 0.7234 | -0.0185 | -0.0484 | 0.5254 | 0.0299 | -56.9279 | -44.5561 | -2.1644 | -2.2826 |
| 0.7231 | 0.0360 | 120 | 0.7051 | -0.0162 | -0.1146 | 0.5709 | 0.0984 | -56.9941 | -44.5537 | -2.1631 | -2.2820 |
| 0.6726 | 0.0420 | 140 | 0.6992 | -0.0527 | -0.1550 | 0.5709 | 0.1023 | -57.0345 | -44.5902 | -2.1600 | -2.2795 |
| 0.7139 | 0.0480 | 160 | 0.6813 | -0.1036 | -0.2725 | 0.5963 | 0.1689 | -57.1521 | -44.6412 | -2.1547 | -2.2752 |
| 0.5948 | 0.0540 | 180 | 0.6609 | 0.0104 | -0.2448 | 0.6497 | 0.2552 | -57.1243 | -44.5271 | -2.1659 | -2.2880 |
| 0.6842 | 0.0600 | 200 | 0.6667 | -0.1141 | -0.3838 | 0.6110 | 0.2697 | -57.2633 | -44.6517 | -2.1536 | -2.2769 |
| 0.6169 | 0.0660 | 220 | 0.6471 | 0.0135 | -0.3235 | 0.6404 | 0.3370 | -57.2030 | -44.5241 | -2.1622 | -2.2873 |
| 0.6752 | 0.0720 | 240 | 0.6409 | 0.0088 | -0.3846 | 0.6511 | 0.3934 | -57.2641 | -44.5287 | -2.1605 | -2.2863 |
| 0.6502 | 0.0780 | 260 | 0.6362 | -0.0716 | -0.5089 | 0.6497 | 0.4373 | -57.3885 | -44.6092 | -2.1554 | -2.2824 |
| 0.5934 | 0.0840 | 280 | 0.6272 | -0.1387 | -0.6197 | 0.6631 | 0.4810 | -57.4992 | -44.6763 | -2.1539 | -2.2818 |
| 0.5999 | 0.0900 | 300 | 0.6223 | -0.0364 | -0.5948 | 0.6925 | 0.5584 | -57.4744 | -44.5740 | -2.1586 | -2.2879 |
| 0.69 | 0.0960 | 320 | 0.6183 | -0.0438 | -0.6600 | 0.6952 | 0.6162 | -57.5396 | -44.5814 | -2.1622 | -2.2918 |
| 0.4964 | 0.1019 | 340 | 0.6317 | -0.3186 | -0.9938 | 0.6604 | 0.6751 | -57.8733 | -44.8562 | -2.1377 | -2.2681 |
| 0.4913 | 0.1079 | 360 | 0.6235 | -0.3381 | -1.1243 | 0.6805 | 0.7862 | -58.0038 | -44.8757 | -2.1339 | -2.2662 |
| 0.4482 | 0.1139 | 380 | 0.6236 | -0.0210 | -0.8956 | 0.6805 | 0.8746 | -57.7751 | -44.5585 | -2.1667 | -2.3009 |
| 0.5399 | 0.1199 | 400 | 0.6323 | -0.2401 | -1.1078 | 0.6979 | 0.8677 | -57.9873 | -44.7777 | -2.1470 | -2.2810 |
| 0.5249 | 0.1259 | 420 | 0.6344 | 0.0831 | -0.8142 | 0.6912 | 0.8972 | -57.6937 | -44.4545 | -2.1864 | -2.3230 |
| 0.6384 | 0.1319 | 440 | 0.6294 | -0.4044 | -1.3788 | 0.6872 | 0.9744 | -58.2583 | -44.9419 | -2.1520 | -2.2888 |
| 0.5953 | 0.1379 | 460 | 0.6400 | -0.7548 | -1.7831 | 0.7086 | 1.0284 | -58.6627 | -45.2923 | -2.1144 | -2.2502 |
| 0.522 | 0.1439 | 480 | 0.6341 | -0.5902 | -1.6744 | 0.6858 | 1.0843 | -58.5540 | -45.1277 | -2.1335 | -2.2706 |
| 0.4451 | 0.1499 | 500 | 0.6324 | -0.5253 | -1.6694 | 0.6778 | 1.1440 | -58.5489 | -45.0629 | -2.1456 | -2.2833 |
| 0.727 | 0.1559 | 520 | 0.6371 | -0.5147 | -1.7064 | 0.7032 | 1.1917 | -58.5859 | -45.0523 | -2.1604 | -2.2991 |
| 0.6192 | 0.1619 | 540 | 0.6344 | -0.7418 | -1.9812 | 0.7019 | 1.2394 | -58.8607 | -45.2793 | -2.1350 | -2.2738 |
| 0.5033 | 0.1679 | 560 | 0.6363 | -0.9735 | -2.1470 | 0.6778 | 1.1735 | -59.0265 | -45.5110 | -2.1126 | -2.2491 |
| 0.4853 | 0.1739 | 580 | 0.6302 | -0.6238 | -1.7596 | 0.6925 | 1.1358 | -58.6391 | -45.1614 | -2.1523 | -2.2893 |
| 0.6925 | 0.1799 | 600 | 0.6317 | -0.4322 | -1.6182 | 0.6912 | 1.1860 | -58.4977 | -44.9698 | -2.1648 | -2.3029 |
| 0.4047 | 0.1859 | 620 | 0.6431 | -0.5655 | -1.8246 | 0.6898 | 1.2590 | -58.7041 | -45.1031 | -2.1675 | -2.3078 |
| 0.5151 | 0.1919 | 640 | 0.6414 | -0.6019 | -1.8492 | 0.7112 | 1.2473 | -58.7288 | -45.1395 | -2.1405 | -2.2794 |
| 0.8247 | 0.1979 | 660 | 0.6504 | -0.8521 | -2.1329 | 0.7005 | 1.2809 | -59.0125 | -45.3896 | -2.1295 | -2.2687 |
| 0.4962 | 0.2039 | 680 | 0.6430 | -0.6045 | -2.0342 | 0.7099 | 1.4297 | -58.9137 | -45.1420 | -2.1637 | -2.3050 |
| 0.5649 | 0.2099 | 700 | 0.6594 | -0.6476 | -2.1146 | 0.7005 | 1.4671 | -58.9942 | -45.1851 | -2.1749 | -2.3178 |
| 0.5217 | 0.2159 | 720 | 0.6491 | -0.4578 | -1.8838 | 0.7032 | 1.4260 | -58.7633 | -44.9954 | -2.1754 | -2.3186 |
| 0.6405 | 0.2219 | 740 | 0.6593 | -0.3966 | -1.8889 | 0.6912 | 1.4923 | -58.7684 | -44.9341 | -2.1837 | -2.3254 |
| 0.4877 | 0.2279 | 760 | 0.6467 | -1.3389 | -2.8390 | 0.6912 | 1.5001 | -59.7186 | -45.8764 | -2.1084 | -2.2484 |
| 0.7788 | 0.2339 | 780 | 0.6576 | -1.4074 | -3.0161 | 0.6858 | 1.6087 | -59.8956 | -45.9450 | -2.1070 | -2.2472 |
| 0.5338 | 0.2399 | 800 | 0.6518 | -1.3519 | -3.0170 | 0.7112 | 1.6651 | -59.8965 | -45.8895 | -2.1172 | -2.2579 |
| 0.4575 | 0.2459 | 820 | 0.6560 | -1.1112 | -2.8381 | 0.7193 | 1.7269 | -59.7176 | -45.6487 | -2.1482 | -2.2899 |
| 0.4285 | 0.2519 | 840 | 0.6519 | -1.4085 | -3.1304 | 0.7059 | 1.7219 | -60.0099 | -45.9460 | -2.1339 | -2.2743 |
| 0.7747 | 0.2579 | 860 | 0.6641 | -1.2414 | -2.9518 | 0.7059 | 1.7104 | -59.8314 | -45.7790 | -2.1472 | -2.2873 |
| 0.4785 | 0.2639 | 880 | 0.6505 | -2.0289 | -3.7855 | 0.7045 | 1.7566 | -60.6650 | -46.5665 | -2.0859 | -2.2231 |
| 0.5739 | 0.2699 | 900 | 0.6475 | -1.7781 | -3.6285 | 0.7139 | 1.8503 | -60.5080 | -46.3157 | -2.1040 | -2.2432 |
| 0.5871 | 0.2759 | 920 | 0.6803 | -1.0974 | -3.0498 | 0.6965 | 1.9524 | -59.9294 | -45.6350 | -2.1839 | -2.3262 |
| 0.6187 | 0.2819 | 940 | 0.6668 | -1.7882 | -3.7525 | 0.7099 | 1.9642 | -60.6320 | -46.3258 | -2.1041 | -2.2430 |
| 0.6623 | 0.2879 | 960 | 0.6628 | -1.8382 | -3.7596 | 0.7059 | 1.9214 | -60.6392 | -46.3758 | -2.0864 | -2.2249 |
| 0.6939 | 0.2939 | 980 | 0.6605 | -1.7254 | -3.6622 | 0.7152 | 1.9368 | -60.5417 | -46.2629 | -2.1066 | -2.2447 |
| 0.475 | 0.2999 | 1000 | 0.6533 | -1.7708 | -3.6771 | 0.7206 | 1.9062 | -60.5566 | -46.3084 | -2.1115 | -2.2509 |
| 0.8912 | 0.3058 | 1020 | 0.6618 | -1.6275 | -3.5842 | 0.7219 | 1.9567 | -60.4638 | -46.1650 | -2.1205 | -2.2597 |
| 0.5321 | 0.3118 | 1040 | 0.6596 | -1.7598 | -3.7302 | 0.7193 | 1.9703 | -60.6097 | -46.2974 | -2.1103 | -2.2489 |
| 0.6862 | 0.3178 | 1060 | 0.6686 | -1.8745 | -3.9387 | 0.7126 | 2.0642 | -60.8182 | -46.4120 | -2.1166 | -2.2577 |
| 0.4558 | 0.3238 | 1080 | 0.6564 | -2.0263 | -4.1281 | 0.7380 | 2.1018 | -61.0076 | -46.5638 | -2.0930 | -2.2324 |
| 0.3059 | 0.3298 | 1100 | 0.6663 | -2.0824 | -4.2721 | 0.7246 | 2.1897 | -61.1516 | -46.6200 | -2.1027 | -2.2424 |
| 0.4229 | 0.3358 | 1120 | 0.6676 | -1.9930 | -4.0483 | 0.7179 | 2.0553 | -60.9278 | -46.5306 | -2.1033 | -2.2431 |
| 0.5506 | 0.3418 | 1140 | 0.6640 | -1.9467 | -4.0585 | 0.7193 | 2.1118 | -60.9381 | -46.4843 | -2.0865 | -2.2275 |
| 0.5634 | 0.3478 | 1160 | 0.6638 | -1.8475 | -3.9575 | 0.7139 | 2.1100 | -60.8370 | -46.3850 | -2.0903 | -2.2322 |
| 0.4307 | 0.3538 | 1180 | 0.6581 | -1.7963 | -3.9082 | 0.7219 | 2.1119 | -60.7878 | -46.3338 | -2.0909 | -2.2311 |
| 0.5172 | 0.3598 | 1200 | 0.6717 | -1.5679 | -3.7338 | 0.7313 | 2.1659 | -60.6134 | -46.1055 | -2.1200 | -2.2605 |
| 0.3822 | 0.3658 | 1220 | 0.6699 | -1.9235 | -4.0493 | 0.7166 | 2.1258 | -60.9288 | -46.4610 | -2.0957 | -2.2349 |
| 0.9171 | 0.3718 | 1240 | 0.6804 | -2.2783 | -4.4251 | 0.7166 | 2.1468 | -61.3046 | -46.8159 | -2.0656 | -2.2056 |
| 0.6194 | 0.3778 | 1260 | 0.6917 | -2.3548 | -4.5524 | 0.7126 | 2.1976 | -61.4319 | -46.8923 | -2.0608 | -2.2017 |
| 0.6659 | 0.3838 | 1280 | 0.6900 | -2.2726 | -4.4520 | 0.7286 | 2.1794 | -61.3316 | -46.8102 | -2.0652 | -2.2049 |
| 0.3378 | 0.3898 | 1300 | 0.6903 | -1.8588 | -4.0217 | 0.7152 | 2.1629 | -60.9012 | -46.3963 | -2.1091 | -2.2495 |
| 0.506 | 0.3958 | 1320 | 0.6938 | -2.7278 | -5.0100 | 0.7193 | 2.2822 | -61.8896 | -47.2654 | -2.0473 | -2.1868 |
| 0.3887 | 0.4018 | 1340 | 0.6908 | -2.3729 | -4.6484 | 0.7139 | 2.2755 | -61.5279 | -46.9105 | -2.0790 | -2.2196 |
| 0.3464 | 0.4078 | 1360 | 0.6881 | -2.6108 | -4.8633 | 0.7086 | 2.2526 | -61.7429 | -47.1483 | -2.0595 | -2.2005 |
| 0.7306 | 0.4138 | 1380 | 0.6828 | -2.3998 | -4.6751 | 0.7059 | 2.2753 | -61.5546 | -46.9374 | -2.0877 | -2.2297 |
| 0.1986 | 0.4198 | 1400 | 0.6870 | -2.4325 | -4.7269 | 0.7326 | 2.2944 | -61.6064 | -46.9701 | -2.0893 | -2.2304 |
| 0.7116 | 0.4258 | 1420 | 0.6939 | -2.5713 | -4.8697 | 0.7193 | 2.2984 | -61.7493 | -47.1089 | -2.0816 | -2.2229 |
| 0.6608 | 0.4318 | 1440 | 0.6894 | -2.2187 | -4.5679 | 0.7219 | 2.3492 | -61.4474 | -46.7562 | -2.1172 | -2.2610 |
| 0.8644 | 0.4378 | 1460 | 0.7002 | -2.6077 | -4.9665 | 0.7166 | 2.3588 | -61.8460 | -47.1453 | -2.0833 | -2.2257 |
| 0.4087 | 0.4438 | 1480 | 0.6975 | -2.7359 | -5.0925 | 0.7193 | 2.3566 | -61.9720 | -47.2734 | -2.0655 | -2.2069 |
| 0.7076 | 0.4498 | 1500 | 0.6892 | -2.3484 | -4.6709 | 0.7259 | 2.3225 | -61.5504 | -46.8859 | -2.0782 | -2.2202 |
| 0.7206 | 0.4558 | 1520 | 0.6892 | -2.1198 | -4.5093 | 0.7286 | 2.3895 | -61.3888 | -46.6574 | -2.1043 | -2.2482 |
| 0.8217 | 0.4618 | 1540 | 0.7121 | -2.3669 | -4.8825 | 0.7353 | 2.5156 | -61.7620 | -46.9044 | -2.0955 | -2.2401 |
| 0.5588 | 0.4678 | 1560 | 0.7096 | -2.3548 | -4.8426 | 0.7273 | 2.4878 | -61.7221 | -46.8923 | -2.0981 | -2.2418 |
| 0.578 | 0.4738 | 1580 | 0.7188 | -1.7947 | -4.2905 | 0.7286 | 2.4957 | -61.1700 | -46.3323 | -2.1527 | -2.2982 |
| 0.7222 | 0.4798 | 1600 | 0.7122 | -2.1590 | -4.5849 | 0.7206 | 2.4259 | -61.4644 | -46.6966 | -2.1214 | -2.2657 |
| 0.3372 | 0.4858 | 1620 | 0.7012 | -2.5398 | -4.9933 | 0.7340 | 2.4535 | -61.8729 | -47.0774 | -2.0822 | -2.2256 |
| 0.5369 | 0.4918 | 1640 | 0.6970 | -2.5133 | -4.9437 | 0.7219 | 2.4304 | -61.8233 | -47.0508 | -2.0864 | -2.2288 |
| 0.5054 | 0.4978 | 1660 | 0.6975 | -2.2049 | -4.5710 | 0.7259 | 2.3661 | -61.4505 | -46.7424 | -2.1064 | -2.2479 |
| 0.4142 | 0.5037 | 1680 | 0.6910 | -2.2679 | -4.6482 | 0.7139 | 2.3803 | -61.5278 | -46.8055 | -2.0951 | -2.2383 |
| 0.4865 | 0.5097 | 1700 | 0.6818 | -2.4714 | -4.8268 | 0.7353 | 2.3554 | -61.7064 | -47.0089 | -2.0813 | -2.2247 |
| 0.3908 | 0.5157 | 1720 | 0.6801 | -2.1243 | -4.4346 | 0.7219 | 2.3104 | -61.3142 | -46.6618 | -2.1019 | -2.2465 |
| 0.2563 | 0.5217 | 1740 | 0.6942 | -2.2217 | -4.5143 | 0.7139 | 2.2927 | -61.3939 | -46.7592 | -2.0919 | -2.2360 |
| 0.8059 | 0.5277 | 1760 | 0.6933 | -2.1915 | -4.4085 | 0.6939 | 2.2170 | -61.2880 | -46.7290 | -2.1022 | -2.2445 |
| 0.5421 | 0.5337 | 1780 | 0.6908 | -2.2201 | -4.4524 | 0.7086 | 2.2323 | -61.3319 | -46.7577 | -2.1031 | -2.2450 |
| 0.6333 | 0.5397 | 1800 | 0.6935 | -2.1878 | -4.5141 | 0.7005 | 2.3263 | -61.3937 | -46.7254 | -2.1102 | -2.2538 |
| 0.4804 | 0.5457 | 1820 | 0.7024 | -2.2358 | -4.5557 | 0.7166 | 2.3199 | -61.4352 | -46.7734 | -2.1137 | -2.2574 |
| 0.3314 | 0.5517 | 1840 | 0.7052 | -1.9518 | -4.2601 | 0.7005 | 2.3083 | -61.1396 | -46.4894 | -2.1324 | -2.2756 |
| 0.4125 | 0.5577 | 1860 | 0.6962 | -2.0291 | -4.3678 | 0.7219 | 2.3388 | -61.2474 | -46.5666 | -2.1220 | -2.2645 |
| 0.6569 | 0.5637 | 1880 | 0.6883 | -2.3323 | -4.7211 | 0.7233 | 2.3888 | -61.6006 | -46.8698 | -2.0958 | -2.2377 |
| 0.3756 | 0.5697 | 1900 | 0.6870 | -2.3790 | -4.7644 | 0.7126 | 2.3854 | -61.6439 | -46.9166 | -2.0953 | -2.2371 |
| 0.7147 | 0.5757 | 1920 | 0.6871 | -2.2726 | -4.6790 | 0.7219 | 2.4063 | -61.5585 | -46.8102 | -2.1059 | -2.2484 |
| 0.9337 | 0.5817 | 1940 | 0.6852 | -2.4862 | -4.8971 | 0.7152 | 2.4109 | -61.7766 | -47.0237 | -2.0878 | -2.2299 |
| 0.5631 | 0.5877 | 1960 | 0.6770 | -2.3418 | -4.7249 | 0.7393 | 2.3831 | -61.6045 | -46.8794 | -2.0985 | -2.2403 |
| 0.5772 | 0.5937 | 1980 | 0.6879 | -2.1181 | -4.4755 | 0.7206 | 2.3574 | -61.3550 | -46.6556 | -2.1180 | -2.2613 |
| 0.2098 | 0.5997 | 2000 | 0.6891 | -1.8612 | -4.1626 | 0.7179 | 2.3014 | -61.0421 | -46.3987 | -2.1442 | -2.2877 |
| 0.3082 | 0.6057 | 2020 | 0.6956 | -2.0269 | -4.3495 | 0.7179 | 2.3227 | -61.2291 | -46.5644 | -2.1367 | -2.2797 |
| 0.5333 | 0.6117 | 2040 | 0.6928 | -2.1962 | -4.5353 | 0.7126 | 2.3391 | -61.4149 | -46.7338 | -2.1259 | -2.2686 |
| 0.5903 | 0.6177 | 2060 | 0.6990 | -2.5064 | -4.9049 | 0.7193 | 2.3985 | -61.7844 | -47.0440 | -2.1073 | -2.2492 |
| 0.184 | 0.6237 | 2080 | 0.6941 | -2.5454 | -4.9240 | 0.7086 | 2.3786 | -61.8036 | -47.0829 | -2.0995 | -2.2413 |
| 0.7275 | 0.6297 | 2100 | 0.6836 | -2.1983 | -4.6108 | 0.7259 | 2.4125 | -61.4903 | -46.7359 | -2.1293 | -2.2730 |
| 0.606 | 0.6357 | 2120 | 0.6869 | -2.2002 | -4.5834 | 0.7152 | 2.3832 | -61.4629 | -46.7377 | -2.1268 | -2.2703 |
| 0.7327 | 0.6417 | 2140 | 0.6888 | -2.4671 | -4.8559 | 0.7072 | 2.3888 | -61.7354 | -47.0047 | -2.1033 | -2.2462 |
| 0.2684 | 0.6477 | 2160 | 0.6893 | -2.3924 | -4.7664 | 0.7072 | 2.3740 | -61.6459 | -46.9299 | -2.1085 | -2.2516 |
| 0.2953 | 0.6537 | 2180 | 0.6895 | -2.3006 | -4.7250 | 0.7059 | 2.4243 | -61.6045 | -46.8382 | -2.1266 | -2.2706 |
| 0.6691 | 0.6597 | 2200 | 0.6919 | -2.2877 | -4.7479 | 0.7086 | 2.4602 | -61.6274 | -46.8252 | -2.1320 | -2.2756 |
| 0.5404 | 0.6657 | 2220 | 0.6878 | -2.1183 | -4.6049 | 0.7139 | 2.4866 | -61.4844 | -46.6558 | -2.1384 | -2.2826 |
| 0.5778 | 0.6717 | 2240 | 0.6797 | -1.8984 | -4.3159 | 0.7032 | 2.4175 | -61.1954 | -46.4359 | -2.1527 | -2.2975 |
| 0.4607 | 0.6777 | 2260 | 0.6735 | -1.8369 | -4.2588 | 0.7139 | 2.4219 | -61.1383 | -46.3745 | -2.1477 | -2.2921 |
| 0.3312 | 0.6837 | 2280 | 0.6692 | -1.9983 | -4.4262 | 0.7219 | 2.4279 | -61.3057 | -46.5358 | -2.1324 | -2.2762 |
| 0.2556 | 0.6897 | 2300 | 0.6711 | -2.0698 | -4.4805 | 0.7086 | 2.4107 | -61.3600 | -46.6073 | -2.1257 | -2.2700 |
| 0.3492 | 0.6957 | 2320 | 0.6690 | -2.0765 | -4.5283 | 0.7193 | 2.4518 | -61.4078 | -46.6141 | -2.1219 | -2.2666 |
| 0.848 | 0.7016 | 2340 | 0.6757 | -1.8493 | -4.2920 | 0.7246 | 2.4426 | -61.1715 | -46.3869 | -2.1421 | -2.2868 |
| 0.6099 | 0.7076 | 2360 | 0.6675 | -1.7970 | -4.2506 | 0.7166 | 2.4536 | -61.1301 | -46.3346 | -2.1512 | -2.2960 |
| 0.3547 | 0.7136 | 2380 | 0.6705 | -1.7867 | -4.2318 | 0.7246 | 2.4451 | -61.1114 | -46.3243 | -2.1535 | -2.2982 |
| 0.8666 | 0.7196 | 2400 | 0.6664 | -1.9631 | -4.4068 | 0.7353 | 2.4437 | -61.2863 | -46.5006 | -2.1301 | -2.2740 |
| 0.6804 | 0.7256 | 2420 | 0.6672 | -2.0659 | -4.5412 | 0.7193 | 2.4753 | -61.4207 | -46.6034 | -2.1147 | -2.2581 |
| 0.7482 | 0.7316 | 2440 | 0.6633 | -2.0293 | -4.5186 | 0.7166 | 2.4893 | -61.3982 | -46.5669 | -2.1153 | -2.2592 |
| 0.1907 | 0.7376 | 2460 | 0.6652 | -1.9948 | -4.4109 | 0.7193 | 2.4161 | -61.2905 | -46.5324 | -2.1201 | -2.2645 |
| 0.6397 | 0.7436 | 2480 | 0.6581 | -2.0358 | -4.4989 | 0.7206 | 2.4631 | -61.3784 | -46.5734 | -2.1102 | -2.2540 |
| 0.7673 | 0.7496 | 2500 | 0.6611 | -2.0336 | -4.4390 | 0.7219 | 2.4054 | -61.3185 | -46.5712 | -2.1117 | -2.2557 |
| 0.2776 | 0.7556 | 2520 | 0.6561 | -2.1506 | -4.5795 | 0.7273 | 2.4289 | -61.4590 | -46.6881 | -2.1022 | -2.2457 |
| 0.5634 | 0.7616 | 2540 | 0.6633 | -2.0835 | -4.5215 | 0.7246 | 2.4380 | -61.4011 | -46.6211 | -2.1083 | -2.2521 |
| 0.5224 | 0.7676 | 2560 | 0.6602 | -2.1026 | -4.5935 | 0.7099 | 2.4908 | -61.4730 | -46.6402 | -2.1096 | -2.2538 |
| 0.6472 | 0.7736 | 2580 | 0.6549 | -2.1858 | -4.6932 | 0.7273 | 2.5075 | -61.5728 | -46.7233 | -2.1119 | -2.2563 |
| 0.1969 | 0.7796 | 2600 | 0.6543 | -2.1073 | -4.5854 | 0.7206 | 2.4780 | -61.4649 | -46.6449 | -2.1165 | -2.2613 |
| 0.6203 | 0.7856 | 2620 | 0.6576 | -2.0419 | -4.5531 | 0.7246 | 2.5112 | -61.4326 | -46.5794 | -2.1238 | -2.2690 |
| 0.2623 | 0.7916 | 2640 | 0.6590 | -1.9142 | -4.4205 | 0.7206 | 2.5063 | -61.3000 | -46.4517 | -2.1329 | -2.2780 |
| 0.492 | 0.7976 | 2660 | 0.6574 | -1.9754 | -4.4518 | 0.7032 | 2.4764 | -61.3313 | -46.5129 | -2.1323 | -2.2775 |
| 0.2914 | 0.8036 | 2680 | 0.6560 | -2.0156 | -4.5338 | 0.7086 | 2.5182 | -61.4133 | -46.5531 | -2.1317 | -2.2770 |
| 0.5721 | 0.8096 | 2700 | 0.6614 | -1.9325 | -4.4428 | 0.7139 | 2.5103 | -61.3223 | -46.4701 | -2.1406 | -2.2857 |
| 0.3852 | 0.8156 | 2720 | 0.6590 | -1.9872 | -4.4833 | 0.7259 | 2.4961 | -61.3629 | -46.5248 | -2.1408 | -2.2862 |
| 0.2812 | 0.8216 | 2740 | 0.6562 | -2.0465 | -4.5870 | 0.7072 | 2.5405 | -61.4665 | -46.5841 | -2.1269 | -2.2715 |
| 0.2851 | 0.8276 | 2760 | 0.6547 | -2.1095 | -4.5954 | 0.7193 | 2.4859 | -61.4750 | -46.6470 | -2.1184 | -2.2628 |
| 0.8393 | 0.8336 | 2780 | 0.6571 | -2.1396 | -4.6171 | 0.7219 | 2.4775 | -61.4967 | -46.6772 | -2.1186 | -2.2630 |
| 0.5655 | 0.8396 | 2800 | 0.6516 | -2.1111 | -4.6059 | 0.7139 | 2.4948 | -61.4854 | -46.6487 | -2.1193 | -2.2638 |
| 0.3546 | 0.8456 | 2820 | 0.6537 | -2.1519 | -4.6330 | 0.7179 | 2.4811 | -61.5125 | -46.6895 | -2.1134 | -2.2580 |
| 0.446 | 0.8516 | 2840 | 0.6479 | -2.1198 | -4.6140 | 0.7273 | 2.4943 | -61.4936 | -46.6573 | -2.1150 | -2.2596 |
| 0.5832 | 0.8576 | 2860 | 0.6465 | -2.0658 | -4.6011 | 0.7259 | 2.5353 | -61.4806 | -46.6034 | -2.1181 | -2.2626 |
| 0.6505 | 0.8636 | 2880 | 0.6483 | -2.0690 | -4.5923 | 0.7219 | 2.5233 | -61.4718 | -46.6066 | -2.1178 | -2.2628 |
| 0.3858 | 0.8696 | 2900 | 0.6476 | -2.0858 | -4.6004 | 0.7166 | 2.5146 | -61.4800 | -46.6233 | -2.1218 | -2.2670 |
| 0.3295 | 0.8756 | 2920 | 0.6488 | -2.0995 | -4.6149 | 0.7219 | 2.5154 | -61.4944 | -46.6370 | -2.1158 | -2.2607 |
| 0.3241 | 0.8816 | 2940 | 0.6478 | -2.0960 | -4.6170 | 0.7152 | 2.5210 | -61.4965 | -46.6335 | -2.1145 | -2.2595 |
| 0.3663 | 0.8876 | 2960 | 0.6467 | -2.1280 | -4.6494 | 0.7179 | 2.5214 | -61.5289 | -46.6655 | -2.1132 | -2.2584 |
| 0.3648 | 0.8936 | 2980 | 0.6440 | -2.0627 | -4.5844 | 0.7179 | 2.5218 | -61.4640 | -46.6002 | -2.1160 | -2.2609 |
| 0.4743 | 0.8996 | 3000 | 0.6500 | -2.0499 | -4.5669 | 0.7299 | 2.5171 | -61.4465 | -46.5874 | -2.1198 | -2.2649 |
| 0.7798 | 0.9055 | 3020 | 0.6462 | -2.0707 | -4.5856 | 0.7273 | 2.5149 | -61.4652 | -46.6082 | -2.1208 | -2.2661 |
| 0.3346 | 0.9115 | 3040 | 0.6477 | -2.0691 | -4.5847 | 0.7166 | 2.5155 | -61.4642 | -46.6067 | -2.1193 | -2.2647 |
| 0.4855 | 0.9175 | 3060 | 0.6435 | -2.0709 | -4.5786 | 0.7259 | 2.5077 | -61.4581 | -46.6084 | -2.1181 | -2.2634 |
| 0.5424 | 0.9235 | 3080 | 0.6485 | -2.0632 | -4.5901 | 0.7259 | 2.5269 | -61.4697 | -46.6008 | -2.1152 | -2.2605 |
| 0.3761 | 0.9295 | 3100 | 0.6439 | -2.0873 | -4.5929 | 0.7193 | 2.5056 | -61.4724 | -46.6248 | -2.1152 | -2.2602 |
| 0.2931 | 0.9355 | 3120 | 0.6478 | -2.0627 | -4.5833 | 0.7152 | 2.5206 | -61.4629 | -46.6003 | -2.1113 | -2.2565 |
| 0.3193 | 0.9415 | 3140 | 0.6439 | -2.0928 | -4.5996 | 0.7179 | 2.5068 | -61.4792 | -46.6304 | -2.1132 | -2.2582 |
| 0.4791 | 0.9475 | 3160 | 0.6445 | -2.0744 | -4.6017 | 0.7193 | 2.5273 | -61.4812 | -46.6120 | -2.1155 | -2.2608 |
| 0.3763 | 0.9535 | 3180 | 0.6396 | -2.0806 | -4.6026 | 0.7286 | 2.5220 | -61.4822 | -46.6182 | -2.1136 | -2.2587 |
| 0.5297 | 0.9595 | 3200 | 0.6500 | -2.0925 | -4.6026 | 0.7299 | 2.5101 | -61.4821 | -46.6301 | -2.1103 | -2.2552 |
| 0.2585 | 0.9655 | 3220 | 0.6416 | -2.0713 | -4.6061 | 0.7246 | 2.5348 | -61.4857 | -46.6088 | -2.1133 | -2.2581 |
| 0.3837 | 0.9715 | 3240 | 0.6493 | -2.1096 | -4.5992 | 0.7246 | 2.4896 | -61.4787 | -46.6471 | -2.1132 | -2.2582 |
| 0.4256 | 0.9775 | 3260 | 0.6430 | -2.0988 | -4.6339 | 0.7313 | 2.5351 | -61.5134 | -46.6363 | -2.1124 | -2.2574 |
| 0.4037 | 0.9835 | 3280 | 0.6461 | -2.0739 | -4.6004 | 0.7206 | 2.5265 | -61.4799 | -46.6114 | -2.1120 | -2.2571 |
| 0.5364 | 0.9895 | 3300 | 0.6487 | -2.0991 | -4.6044 | 0.7233 | 2.5053 | -61.4839 | -46.6366 | -2.1121 | -2.2571 |
| 0.6341 | 0.9955 | 3320 | 0.6454 | -2.0856 | -4.6048 | 0.7259 | 2.5191 | -61.4843 | -46.6232 | -2.1125 | -2.2575 |
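The reward, margin, accuracy, and log-probability columns above match the logging format of preference-optimization trainers such as TRL's `DPOTrainer`. Assuming that is what produced this table (the training script itself is not included in the card), the implicit reward of a response $y$ to a prompt $x$ is the scaled log-probability ratio between the policy and a frozen reference model:

$$
r(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right)
$$

Under that reading, the chosen/rejected reward columns are derived from the corresponding log-probabilities, the margin column is the chosen-minus-rejected reward difference, and the accuracy column is the fraction of preference pairs for which the chosen response receives the higher reward.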
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
Nurhana/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret | Nurhana | 2025-06-04T00:40:47Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rugged padded ferret",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-10T09:11:41Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rugged padded ferret
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nurhana/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rugged_padded_ferret", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
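For orientation, below is a minimal sketch of how a GRPO run is typically set up with TRL's `GRPOTrainer`. The prompts and the reward function are illustrative placeholders only; the actual Gensyn RL-Swarm pipeline behind this checkpoint uses its own data and reward logic.

```python
# Minimal GRPO sketch with TRL (placeholder data and reward, not the RL-Swarm setup).
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# GRPO samples several completions per prompt and scores them with reward
# functions, so a prompt-only dataset (a "prompt" column) is sufficient.
train_dataset = Dataset.from_dict(
    {"prompt": ["Summarize: the cat sat on the mat.",
                "Summarize: GRPO scores groups of sampled completions."] * 8}
)

def reward_brevity(completions, **kwargs):
    # Placeholder reward: prefer completions of roughly 20 characters.
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO-sketch", logging_steps=10)
trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",  # base model named in this card
    reward_funcs=reward_brevity,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```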
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Ivan214ff/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger | Ivan214ff | 2025-06-04T00:40:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am hoarse twitchy tiger",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T20:17:52Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am hoarse twitchy tiger
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ivan214ff/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hoarse_twitchy_tiger", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/llama_8b_unlearned_unbalanced_neutral_2nd_1e-6_1.0_0.5_0.25_0.5_epoch1 | MinaMila | 2025-06-04T00:40:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-04T00:37:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zalikan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise | Zalikan | 2025-06-04T00:40:41Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pawing aquatic tortoise",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T19:49:25Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pawing aquatic tortoise
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Zalikan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pawing_aquatic_tortoise", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Charodey/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_tough_shrimp | Charodey | 2025-06-04T00:40:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scented tough shrimp",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T22:34:00Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_tough_shrimp
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scented tough shrimp
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_tough_shrimp
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Charodey/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scented_tough_shrimp", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MAGICYA0/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_rugged_camel | MAGICYA0 | 2025-06-04T00:40:02Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bold rugged camel",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T21:34:46Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_rugged_camel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bold rugged camel
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_rugged_camel
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MAGICYA0/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_rugged_camel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Oberhaus/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_prowling_jay | Oberhaus | 2025-06-04T00:39:49Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rabid prowling jay",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T23:19:11Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_prowling_jay
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rabid prowling jay
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_prowling_jay
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Oberhaus/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_prowling_jay", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
andriuusa/Qwen2.5-7B-Instruct-Gensyn-Swarm-snappy_whistling_iguana | andriuusa | 2025-06-04T00:39:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am snappy whistling iguana",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-7B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T22:33:15Z | ---
base_model: Gensyn/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-Gensyn-Swarm-snappy_whistling_iguana
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am snappy whistling iguana
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-Gensyn-Swarm-snappy_whistling_iguana
This model is a fine-tuned version of [Gensyn/Qwen2.5-7B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="andriuusa/Qwen2.5-7B-Instruct-Gensyn-Swarm-snappy_whistling_iguana", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/h100trader-gensyn/huggingface/runs/he0vjixf)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
qii1120/llama1B-qlora-distilled-wikitext-S2opt-R64-MLP-E8-S6k-LR1e-5-TGM80-Tkd1-KDR0_6-WD0_01-Re_GC | qii1120 | 2025-06-04T00:39:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-06-03T19:55:23Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
JayMindAI/OH-MainSubCLF | JayMindAI | 2025-06-04T00:39:14Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-25T01:18:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Wiliambill/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_sniffing_lizard | Wiliambill | 2025-06-04T00:39:01Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scavenging sniffing lizard",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T08:28:41Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_sniffing_lizard
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scavenging sniffing lizard
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_sniffing_lizard
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Wiliambill/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scavenging_sniffing_lizard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lsewo/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_meek_salamander | lsewo | 2025-06-04T00:38:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am feline meek salamander",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T11:08:22Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_meek_salamander
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am feline meek salamander
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_meek_salamander
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lsewo/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_meek_salamander", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LipingWang/spaCy-NER-MLA-Citations | LipingWang | 2025-06-04T00:38:32Z | 0 | 0 | null | [
"spaCy",
"ner",
"citation",
"academic",
"mla",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-06-04T00:27:46Z | ---
language: en
license: apache-2.0
tags:
- spaCy
- ner
- citation
- academic
- mla
model-index:
- name: spaCy NER for MLA Citations
results: []
---
# spaCy NER Model for MLA Citations
This spaCy model was trained to extract **authors**, **journal titles**, and **publication dates** from MLA-style academic citations.
It is intended for educational and research purposes in natural language processing and citation parsing.
## How to use
```python
import spacy
nlp = spacy.load("LipingWang/spaCy-NER-MLA-Citations")
doc = nlp("Devedzic, Vladan. 'Education and the semantic web.' International Journal of Artificial Intelligence in Education 14.2 (2004): 165–191.")
for ent in doc.ents:
print(ent.text, ent.label_)
```
 |
AlexanderArtT/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog | AlexanderArtT | 2025-06-04T00:38:20Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tiny nimble warthog",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-13T22:11:38Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tiny nimble warthog
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AlexanderArtT/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_nimble_warthog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Soughing/mlra_large | Soughing | 2025-06-04T00:37:54Z | 34 | 0 | null | [
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T02:20:14Z | ---
license: apache-2.0
---
|
p2g4ads5/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus | p2g4ads5 | 2025-06-04T00:37:36Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am docile playful octopus",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-17T18:34:55Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am docile playful octopus
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="p2g4ads5/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_playful_octopus", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Turalalyv/teamid-t5-football | Turalalyv | 2025-06-04T00:37:20Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-22T13:06:59Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: teamid-t5-football
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teamid-t5-football
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
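For reference, these settings map roughly onto a `TrainingArguments` object like the sketch below; this is a reconstruction from the list above, not the original training script.
```python
# Reconstructed from the hyperparameter list above; the original script is not included.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="teamid-t5-football",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```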
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
relrurel30/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest | relrurel30 | 2025-06-04T00:37:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scaly aquatic wildebeest",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:38:10Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scaly aquatic wildebeest
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="relrurel30/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Yanis-Gerst/fine_tune | Yanis-Gerst | 2025-06-04T00:36:50Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"phi4mm",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-4-multimodal-instruct",
"base_model:finetune:microsoft/Phi-4-multimodal-instruct",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-16T13:00:01Z | ---
library_name: transformers
license: mit
base_model: microsoft/Phi-4-multimodal-instruct
tags:
- generated_from_trainer
model-index:
- name: fine_tune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tune
This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard | hamid1232 | 2025-06-04T00:36:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am yapping giant lizard",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-11T23:22:12Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am yapping giant lizard
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-yapping_giant_lizard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Kapitaka/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah | Kapitaka | 2025-06-04T00:36:44Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tawny meek cheetah",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-09T17:08:56Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tawny meek cheetah
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Kapitaka/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tawny_meek_cheetah", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
albiandb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel | albiandb | 2025-06-04T00:36:32Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am skittish eager squirrel",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T07:13:17Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am skittish eager squirrel
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="albiandb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-skittish_eager_squirrel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
orth0gonal/Qwen2.5-7B-Instruct-Gensyn-Swarm-secretive_pudgy_dove | orth0gonal | 2025-06-04T00:36:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am secretive pudgy dove",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-7B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-07T05:52:27Z | ---
base_model: Gensyn/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-Gensyn-Swarm-secretive_pudgy_dove
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am secretive pudgy dove
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-Gensyn-Swarm-secretive_pudgy_dove
This model is a fine-tuned version of [Gensyn/Qwen2.5-7B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="orth0gonal/Qwen2.5-7B-Instruct-Gensyn-Swarm-secretive_pudgy_dove", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
elipser/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana | elipser | 2025-06-04T00:35:57Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am vigilant miniature iguana",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T11:59:50Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am vigilant miniature iguana
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="elipser/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_miniature_iguana", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RekklesAI/Mistral-Small-24B-Reasoning | RekklesAI | 2025-06-04T00:35:29Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"reasoning",
"fine-tuned",
"synthetic-thinking",
"math",
"science",
"code",
"puzzles",
"lora",
"conversational",
"en",
"dataset:open-thoughts/OpenThoughts-114k",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:adapter:mistralai/Mistral-Small-24B-Instruct-2501",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-04T00:01:27Z | ---
license: apache-2.0
base_model: mistralai/Mistral-Small-24B-Instruct-2501
tags:
- mistral
- reasoning
- fine-tuned
- synthetic-thinking
- math
- science
- code
- puzzles
- lora
library_name: transformers
pipeline_tag: text-generation
datasets:
- open-thoughts/OpenThoughts-114k
language:
- en
---
# Mistral-Small-24B-Reasoning 🧠
**Mistral-Small-24B-Reasoning** is a fine-tuned version of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) that has been enhanced for advanced reasoning and thinking tasks. This model was trained on the high-quality [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset, which contains 114,000 synthetic reasoning examples covering mathematics, science, coding, and complex puzzles.
## 🚀 Model Overview
Mistral-Small-24B-Reasoning excels at:
- **Step-by-step reasoning** across multiple domains
- **Mathematical problem solving** with detailed explanations
- **Scientific analysis** and conceptual understanding
- **Code generation and debugging** with logical thinking
- **Complex puzzle solving** requiring multi-step reasoning
The model has been fine-tuned to generate explicit thinking processes, making its reasoning transparent and interpretable.
## 📊 Model Details
- **Base Model**: mistralai/Mistral-Small-24B-Instruct-2501
- **Parameters**: 24 billion
- **Architecture**: MistralForCausalLM
- **Context Length**: 32,768 tokens
- **Precision**: bfloat16
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Dataset**: OpenThoughts-114k (114,000 high-quality reasoning examples)
## 🔧 Training Configuration
- **LoRA Rank**: 8
- **LoRA Alpha**: 16
- **Learning Rate**: 5e-5
- **Batch Size**: 2 per device
- **Gradient Accumulation**: 8 steps
- **Training Epochs**: 5
- **Optimizer**: AdamW
- **Scheduler**: Cosine
- **Max Samples**: 100,000
- **Thinking Mode**: Enabled
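As a reference only, the LoRA settings above correspond roughly to a PEFT configuration like the sketch below; the target modules and dropout rate are assumptions (typical attention projections), since the card does not list them.
```python
# Rough PEFT sketch of the stated LoRA setup; target_modules and lora_dropout are assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Instruct-2501",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,                 # LoRA rank from the card
    lora_alpha=16,       # LoRA alpha from the card
    lora_dropout=0.05,   # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```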
## 📊 Training Loss
The training process shows excellent convergence with consistent loss reduction across epochs:

*Training loss curve showing stable convergence during the fine-tuning process with OpenThoughts-114k dataset.*
## 💻 Usage
### Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load the model and tokenizer
model_name = "RekklesAI/Mistral-Small-24B-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Example usage
prompt = "Solve this step by step: What is the derivative of x^3 + 2x^2 - 5x + 1?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Chat Template
```python
messages = [
{"role": "user", "content": "Explain how to solve a quadratic equation using the quadratic formula."}
]
# Apply chat template
formatted_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## 🎯 Use Cases
### Mathematical Reasoning
- Solving complex equations step-by-step
- Proof verification and generation
- Statistical analysis and probability
- Calculus and advanced mathematics
### Scientific Analysis
- Physics problem solving
- Chemistry reaction mechanisms
- Biology concept explanations
- Data interpretation
### Code Development
- Algorithm design and optimization
- Debugging complex code issues
- Code review and improvement suggestions
- Technical architecture decisions
### Problem Solving
- Logic puzzles and brain teasers
- Strategic planning scenarios
- Decision analysis frameworks
- Creative problem-solving approaches
## 📈 Performance
The model demonstrates significant improvements in reasoning tasks compared to the base model:
- Enhanced step-by-step problem decomposition
- More accurate mathematical computations
- Better code generation with explanations
- Improved logical consistency across responses
## ⚠️ Limitations
- The model may occasionally generate verbose explanations
- Performance on extremely specialized domains may vary
- Responses should be verified for critical applications
- May require significant computational resources for inference
## 🔍 Training Data
The model was trained on the [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset, which includes:
- **Mathematics**: Algebra, calculus, geometry, statistics
- **Science**: Physics, chemistry, biology concepts
- **Programming**: Algorithms, data structures, debugging
- **Logic**: Puzzles, reasoning challenges, problem-solving
The dataset contains high-quality synthetic examples with detailed reasoning traces, enabling the model to learn explicit thinking patterns.
## 🏗️ Model Architecture
```
MistralForCausalLM(
- Hidden Size: 5,120
- Intermediate Size: 32,768
- Number of Layers: 40
- Attention Heads: 32
- Key-Value Heads: 8
- Vocabulary Size: 131,072
- Max Position Embeddings: 32,768
- RoPE Theta: 100,000,000
)
```
## 📝 Citation
```bibtex
@misc{mistralsmall24breasoning,
title={Mistral-Small-24B-Reasoning: A Reasoning-Enhanced Large Language Model},
author={[Your Name]},
year={2025},
note={Fine-tuned from Mistral-Small-24B-Instruct-2501 using OpenThoughts-114k dataset}
}
```
## 📄 License
This model is released under the Apache 2.0 License, following the base model's licensing terms.
## 🙏 Acknowledgments
- **Mistral AI** for the exceptional base model
- **OpenThoughts team** for the high-quality reasoning dataset
- **LLaMA-Factory** for the excellent fine-tuning framework
---
*Built with ❤️ using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)* |
Armijo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot | Armijo | 2025-06-04T00:35:21Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am small lithe ocelot",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T23:48:53Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am small lithe ocelot
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Armijo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_lithe_ocelot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Nonokoo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile | Nonokoo | 2025-06-04T00:35:10Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am regal long crocodile",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T04:36:35Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am regal long crocodile
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nonokoo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_long_crocodile", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Machxing/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_agile_salamander | Machxing | 2025-06-04T00:35:03Z | 27 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am zealous agile salamander",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-10T09:13:07Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_agile_salamander
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am zealous agile salamander
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_agile_salamander
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Machxing/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_agile_salamander", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Contenidoscall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar | Contenidoscall | 2025-06-04T00:34:58Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bold twitchy cougar",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-10T08:32:15Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bold twitchy cougar
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Contenidoscall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_twitchy_cougar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/L3-Dark-Planet-8B-wordstorm-b3-i1-GGUF | mradermacher | 2025-06-04T00:34:47Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-03T22:33:41Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/L3-Dark-Planet-8B-wordstorm-b3
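For readers who want to try a quant, a minimal `llama-cpp-python` sketch is given below; the filename is a placeholder guess at the usual naming pattern and should be replaced with an actual GGUF file from this repository.
```python
# Minimal sketch for running one of the imatrix GGUF quants with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/L3-Dark-Planet-8B-wordstorm-b3-i1-GGUF",
    filename="L3-Dark-Planet-8B-wordstorm-b3.i1-Q4_K_M.gguf",  # placeholder filename; check the repo's file list
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Write one sentence about a dark planet.", max_tokens=64)["choices"][0]["text"])
```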
|
Antonioul/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose | Antonioul | 2025-06-04T00:34:38Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am deadly squeaky moose",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T05:29:18Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am deadly squeaky moose
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Antonioul/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_squeaky_moose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Antonwen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear | Antonwen | 2025-06-04T00:34:13Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pale wary bear",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T05:30:34Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pale wary bear
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Antonwen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_wary_bear", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
b3slim/SmolVLM2-2.2B-Instruct-video-feedback | b3slim | 2025-06-04T00:34:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"smolvlm",
"image-text-to-text",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-03T04:19:14Z | ---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM2-2.2B-Instruct
tags:
- generated_from_trainer
model-index:
- name: SmolVLM2-2.2B-Instruct-video-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM2-2.2B-Instruct-video-feedback
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-2.2B-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
m1st3rr0b0t/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_timid_reindeer | m1st3rr0b0t | 2025-06-04T00:33:58Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am downy timid reindeer",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-20T20:11:13Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_timid_reindeer
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am downy timid reindeer
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_timid_reindeer
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="m1st3rr0b0t/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_timid_reindeer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.0
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug | AchyutaGH | 2025-06-04T00:33:30Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slender grazing ladybug",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-18T23:00:30Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slender grazing ladybug
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
0xOzii/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee | 0xOzii | 2025-06-04T00:33:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am large padded chimpanzee",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-10T10:03:58Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am large padded chimpanzee
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xOzii/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hubble658/v3.2-deneme-3 | hubble658 | 2025-06-04T00:32:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-04T00:31:23Z | ---
base_model: unsloth/Qwen2.5-VL-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hubble658
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-3B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
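Since the card stops at the upload note, here is a minimal, untested inference sketch; it assumes the repo loads through the `image-text-to-text` pipeline of a recent transformers release, and the image URL and prompt are placeholders.
```python
# Hypothetical inference sketch for this Qwen2.5-VL fine-tune.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="hubble658/v3.2-deneme-3", device_map="auto")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
out = pipe(text=messages, max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```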
|
ohpage/llama-3-8b-Instruct-bnb-4bit-kowiki-cpt-full | ohpage | 2025-06-04T00:31:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-04T00:21:35Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ohpage
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
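As the card ends here, the following is a minimal, assumed inference sketch using Unsloth's `FastLanguageModel`; it presumes a CUDA GPU and that the repo loads directly in 4-bit, and the prompt is a placeholder.
```python
# Hypothetical Unsloth inference sketch for this 4-bit fine-tune.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ohpage/llama-3-8b-Instruct-bnb-4bit-kowiki-cpt-full",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable the optimized inference path

messages = [{"role": "user", "content": "Briefly introduce the Korean Wikipedia."}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```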
|
keanteng/efficientnet-b7-breast-cancer-classification-0603-2 | keanteng | 2025-06-04T00:31:44Z | 0 | 0 | pytorch | [
"pytorch",
"safetensors",
"efficientnet",
"generative-ai",
"medical-imaging",
"deep-cnn",
"breast-cancer",
"classification",
"image-classification",
"dataset:keanteng/miniddbs-jpeg",
"license:agpl-3.0",
"region:us"
] | image-classification | 2025-06-03T12:40:06Z |
---
license: agpl-3.0
datasets:
- keanteng/miniddbs-jpeg
pipeline_tag: image-classification
library_name: pytorch
tags:
- generative-ai
- medical-imaging
- deep-cnn
- breast-cancer
- classification
new_version: keanteng/efficientnet-breast-cancer-classification-0603
---
# Breast Cancer Classification with EfficientNet
> Performance is poor for reasons that are not yet clear; the cause may be architectural
This repository contains a fine-tuned EfficientNet model for breast cancer classification based on mammography images.
Because the classes in this dataset are visually hard to distinguish, several runs were conducted on the original 3-class task defined by the DDSM dataset, but the accuracy obtained was dismal (approximately 67%), in contrast to the >90% typically reported in the literature.
A dual-input Swin Transformer using the tumour mask was also explored, with similarly dismal accuracy. Inspecting the dataset shows that the Benign and Cancer images look much alike, while only the Normal images stand apart. The detection strategy therefore becomes detecting the presence of cancer: the Benign and Cancer images are merged into a single class and set against the Normal images.
With this approach, accuracy increases significantly and the model achieves reliable performance.
## Model Description
The model is based on the [EfficientNet](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b7.html#torchvision.models.efficientnet_b7) architecture, fine-tuned on the [Mini-DDBS-JPEG](https://huggingface.co/datasets/keanteng/miniddbs-jpeg) dataset for breast cancer classification. It uses a custom classification head consisting of a multi-layer perceptron with dropout for regularization.
### Key Features
- Based on EfficientNet architecture
- Input image size: 256x256 pixels
- Binary classification task (cancer present vs. normal, after merging the Benign and Cancer classes)
- Mixed precision training for improved performance
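For illustration, a plausible reconstruction of the described architecture is sketched below; the hidden size and dropout rate are assumptions, since the card only states that the head is a multi-layer perceptron with dropout.
```python
# Plausible reconstruction of the described model: EfficientNet-B7 backbone with
# a small MLP + dropout head for 2 classes. Hidden size and dropout are assumptions.
import torch.nn as nn
from torchvision.models import efficientnet_b7, EfficientNet_B7_Weights

backbone = efficientnet_b7(weights=EfficientNet_B7_Weights.IMAGENET1K_V1)
in_features = backbone.classifier[1].in_features  # 2560 for EfficientNet-B7
backbone.classifier = nn.Sequential(
    nn.Dropout(p=0.3),
    nn.Linear(in_features, 512),
    nn.ReLU(),
    nn.Dropout(p=0.3),
    nn.Linear(512, 2),
)
```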
## Performance
The model was trained with class balancing techniques to handle data imbalance. Performance metrics on the test set:
| Metric | Value |
|--------|-------|
| Test Accuracy | 0.27877237851662406 |
| Test Loss | 0.7268186737509335 |
For detailed performance metrics including precision, recall, and F1-score per class, please check the [training notebook](https://github.com/keanteng/wqd7025).
## Usage
### With Transformers Pipeline
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="keanteng/efficientnet-b7-breast-cancer-classification-0603-2")
result = classifier("path/to/mammogram.jpg")
print(result)
```
### With AutoModel
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
from PIL import Image
# Load model and feature extractor
model = AutoModelForImageClassification.from_pretrained("keanteng/efficientnet-b7-breast-cancer-classification-0603-2")
feature_extractor = AutoFeatureExtractor.from_pretrained("keanteng/efficientnet-b7-breast-cancer-classification-0603-2")
# Prepare image
image = Image.open("path/to/mammogram.jpg").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="pt")
# Get prediction
outputs = model(**inputs)
predicted_class_idx = outputs.logits.argmax(-1).item()
print(f"Predicted class: model.config.id2label[predicted_class_idx]")
```
|