modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
nadcy/opt-6.7b-lora | nadcy | 2023-07-20T10:59:24Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T10:59:17Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
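For illustration, a config like the one above would typically be built with `transformers.BitsAndBytesConfig` when reloading the adapter. The sketch below is an assumption about usage rather than code from this repository; in particular, the `facebook/opt-6.7b` base model id is only inferred from the adapter name.
```python
# Hypothetical sketch: reload this LoRA adapter on top of an 8-bit base model.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,              # load_in_8bit: True
    llm_int8_threshold=6.0,         # llm_int8_threshold: 6.0
    llm_int8_has_fp16_weight=False,
)

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",            # assumption: base model inferred from the adapter name
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nadcy/opt-6.7b-lora")
```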
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/guten-norm-rarity-log-rarity-end-20k | NasimB | 2023-07-20T10:52:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-20T08:51:51Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-norm-rarity-log-rarity-end-20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-norm-rarity-log-rarity-end-20k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
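For illustration only, these settings map roughly onto `transformers.TrainingArguments` as sketched below; the argument names are assumptions based on the standard Trainer API, not the original training script.
```python
# Hypothetical sketch of the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="guten-norm-rarity-log-rarity-end-20k",
    learning_rate=5e-4,              # learning_rate: 0.0005
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,                       # "Native AMP" mixed precision
)
```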
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3356 | 0.3 | 500 | 5.3433 |
| 5.0136 | 0.59 | 1000 | 4.9319 |
| 4.6901 | 0.89 | 1500 | 4.6908 |
| 4.4248 | 1.19 | 2000 | 4.5507 |
| 4.2814 | 1.48 | 2500 | 4.4273 |
| 4.1763 | 1.78 | 3000 | 4.3220 |
| 4.051 | 2.08 | 3500 | 4.2604 |
| 3.8807 | 2.37 | 4000 | 4.2115 |
| 3.8476 | 2.67 | 4500 | 4.1556 |
| 3.8146 | 2.97 | 5000 | 4.1014 |
| 3.5937 | 3.26 | 5500 | 4.1008 |
| 3.5681 | 3.56 | 6000 | 4.0690 |
| 3.5565 | 3.86 | 6500 | 4.0366 |
| 3.416 | 4.15 | 7000 | 4.0413 |
| 3.308 | 4.45 | 7500 | 4.0335 |
| 3.2985 | 4.74 | 8000 | 4.0194 |
| 3.2598 | 5.04 | 8500 | 4.0185 |
| 3.1205 | 5.34 | 9000 | 4.0235 |
| 3.1206 | 5.63 | 9500 | 4.0218 |
| 3.1213 | 5.93 | 10000 | 4.0214 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
javirandor/passgpt-10characters | javirandor | 2023-07-20T10:45:01Z | 262 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"passwords",
"cybersecurity",
"arxiv:2306.01545",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-15T15:46:33Z | ---
extra_gated_fields:
Institution: text
Country: text
I agree to use this model for non-commercial use ONLY: checkbox
I agree not to use the model to conduct experiments that cause harm to human subjects: checkbox
widget:
- text: <s>
example_title: Example 1
- text: <s>1234
example_title: Example 2
- text: <s>ilov
example_title: Example 3
- text: <s>admin
example_title: Example 4
pipeline_tag: text-generation
tags:
- passwords
- cybersecurity
---
# PassGPT
PassGPT is a causal language model trained on password leaks. It was first introduced in [this paper](https://arxiv.org/abs/2306.01545). This version of the model was trained on passwords from the RockYou leak, filtered to keep only those that were at most 10 characters long. If you need access to PassGPT trained on passwords up to 16 characters long, you can apply [here](https://huggingface.co/javirandor/passgpt-16characters).
**This is a curated version of the model reported in the paper**. Vocabulary size was reduced to the most meaningful characters and training was slightly optimized. Results are slightly better with these architectures.
### Usage and License Notices
[](https://github.com/javirandor/passbert/blob/main/LICENSE)
PassGPT is intended and licensed for research use only. The model and code are CC BY NC 4.0 (allowing only non-commercial use) and should not be used outside of research purposes. This model should never be used to attack real systems.
### Model description
The model inherits the [GPT2LMHeadModel](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2LMHeadModel) architecture and implements a custom [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) that encodes each character in a password as a single token, avoiding merges. It was trained from a random initialization, and the code for training can be found in the [official repository](https://github.com/javirandor/passgpt/).
### Password Generation
Passwords can be sampled from the model using the [built-in generation methods](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) provided by HuggingFace and using the "start of password token" as seed (i.e. `<s>`). This code can be used to generate one password with PassGPT.
```python
import torch
from transformers import GPT2LMHeadModel, RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained("javirandor/passgpt-10characters",
max_len=12,
padding="max_length",
truncation=True,
do_lower_case=False,
strip_accents=False,
mask_token="<mask>",
unk_token="<unk>",
pad_token="<pad>",
truncation_side="right")
model = GPT2LMHeadModel.from_pretrained("javirandor/passgpt-10characters").eval()
NUM_GENERATIONS = 1
# Generate passwords sampling from the beginning of password token
g = model.generate(torch.tensor([[tokenizer.bos_token_id]]),
do_sample=True,
num_return_sequences=NUM_GENERATIONS,
max_length=12,
pad_token_id=tokenizer.pad_token_id,
bad_words_ids=[[tokenizer.bos_token_id]])
# Remove start of sentence token
g = g[:, 1:]
decoded = tokenizer.batch_decode(g.tolist())
decoded_clean = [i.split("</s>")[0] for i in decoded] # Get content before end of password token
# Print your sampled passwords!
print(decoded_clean)
```
You can find a more flexible script for sampling [here](https://github.com/javirandor/passgpt/blob/main/src/generate_passwords.py).
### Cite our work
```
@article{rando2023passgpt,
title={PassGPT: Password Modeling and (Guided) Generation with Large Language Models},
author={Rando, Javier and Perez-Cruz, Fernando and Hitaj, Briland},
journal={arXiv preprint arXiv:2306.01545},
year={2023}
}
``` |
SmellyKat/ppo-Huggy | SmellyKat | 2023-07-20T10:31:10Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-20T10:31:04Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: SmellyKat/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
nolanaatama/htrnkmn | nolanaatama | 2023-07-20T10:28:36Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T10:27:47Z | ---
license: creativeml-openrail-m
---
|
GMW123/finetuning-sentiment-model-3000-samples | GMW123 | 2023-07-20T10:27:43Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-20T10:21:25Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877076411960133
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Accuracy: 0.8767
- F1: 0.8771
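A minimal inference sketch using the standard `transformers` pipeline API; the example review is illustrative and the returned label names depend on the model's label mapping.
```python
# Hypothetical usage sketch for the fine-tuned sentiment classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="GMW123/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a pleasant surprise from start to finish."))
```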
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Epl1/food_classifier | Epl1 | 2023-07-20T10:12:06Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-20T09:38:50Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: Epl1/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Epl1/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3725
- Validation Loss: 0.3553
- Train Accuracy: 0.911
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
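For illustration, roughly the same optimizer can be rebuilt with `transformers.create_optimizer`; the call below is an assumption about how the config above was produced, not the original training code.
```python
# Hypothetical sketch: AdamWeightDecay with a linear PolynomialDecay schedule, matching the config above.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,            # initial_learning_rate
    num_train_steps=20_000,  # decay_steps
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```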
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8116 | 1.7125 | 0.778 | 0 |
| 1.2501 | 0.8766 | 0.851 | 1 |
| 0.7145 | 0.5461 | 0.888 | 2 |
| 0.5083 | 0.4211 | 0.904 | 3 |
| 0.3725 | 0.3553 | 0.911 | 4 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
saisamarth/Falcon40B-Instruct-AdaptersV1 | saisamarth | 2023-07-20T09:55:52Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T09:53:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
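The same 4-bit NF4 config can be written as a `BitsAndBytesConfig` when loading a base model for this adapter; a minimal sketch in which the Falcon base model id is an assumption inferred from the adapter name.
```python
# Hypothetical sketch: load a Falcon-40B-Instruct base in 4-bit NF4 and attach this adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b-instruct",   # assumption: base model inferred from the adapter name
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "saisamarth/Falcon40B-Instruct-AdaptersV1")
```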
### Framework versions
- PEFT 0.4.0.dev0
|
seonglae/llama-2-13b-chat-hf-gptq | seonglae | 2023-07-20T09:54:19Z | 9 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"llama-2",
"llama2",
"gptq",
"auto-gptq",
"13b",
"4bit",
"quantization",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-07-19T08:12:13Z | ---
inference: false
license: other
tags:
- llama-2
- llama2
- gptq
- auto-gptq
- 13b
- llama
- 4bit
- quantization
---
# Get Started
This model should be loaded with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), so you need the `auto-gptq` package.
- `no-act-order` model
- 4bit model quantization
```py
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM

model_id = 'seonglae/llama-2-13b-chat-hf-gptq'
model_basename = 'gptq_model-4bit-128g'  # assumed checkpoint basename; adjust to the actual file name in the repo
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename=model_basename,
trust_remote_code=True,
device='cuda:0',
use_triton=False,
use_safetensors=True,
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
temperature=0.5,
top_p=0.95,
max_new_tokens=100,
repetition_penalty=1.15,
)
prompt = "USER: Are you AI?\nASSISTANT:"
pipe(prompt)
``` |
kyzer0/atha2 | kyzer0 | 2023-07-20T09:52:00Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-20T09:34:37Z | ---
license: bigcode-openrail-m
---
|
l3cube-pune/hing-gpt-devanagari | l3cube-pune | 2023-07-20T09:49:21Z | 162 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-25T11:39:00Z | ---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingGPT-Devanagari
HingGPT-Devanagari is a Hindi-English code-mixed GPT model trained on Devanagari text. It is a GPT2 model trained on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
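A minimal usage sketch with the standard text-generation pipeline; the Devanagari prompt is only illustrative.
```python
# Hypothetical usage sketch for HingGPT-Devanagari.
from transformers import pipeline

generator = pipeline("text-generation", model="l3cube-pune/hing-gpt-devanagari")
print(generator("मैं आज बहुत", max_new_tokens=20))
```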
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
``` |
l3cube-pune/hing-roberta | l3cube-pune | 2023-07-20T09:48:36Z | 310 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-04T19:00:50Z | ---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingRoBERTa
HingRoBERTa is a Hindi-English code-mixed RoBERTa model trained on roman text. It is an xlm-RoBERTa model fine-tuned on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
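A minimal fill-mask usage sketch; the code-mixed example sentence is only illustrative.
```python
# Hypothetical usage sketch for HingRoBERTa.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/hing-roberta")
print(fill_mask("yeh movie bahut <mask> hai"))
```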
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
``` |
l3cube-pune/hing-bert | l3cube-pune | 2023-07-20T09:48:03Z | 140 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-04T18:31:45Z | ---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingBERT
HingBERT is a Hindi-English code-mixed BERT model trained on roman text. It is a base BERT model fine-tuned on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
``` |
MredK/RyTiexv1 | MredK | 2023-07-20T09:47:58Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-20T09:45:38Z | ---
license: openrail
---
Made with a 5-minute dataset\
The training belongs to me\
150 epochs\
Turkish model |
l3cube-pune/hing-mbert | l3cube-pune | 2023-07-20T09:47:51Z | 188 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-04T18:45:09Z | ---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingMBERT
HingMBERT is a Hindi-English code-mixed BERT model trained on roman text. It is an mBERT model fine-tuned on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)<br>
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
``` |
MredK/Akinv2 | MredK | 2023-07-20T09:47:16Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-17T17:57:44Z | ---
license: openrail
---
Made with a 4-minute dataset\
The training belongs to me\
200 epochs\
Turkish model |
MredK/Viper | MredK | 2023-07-20T09:46:35Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-16T17:45:24Z | ---
license: openrail
---
Made with a 10-minute dataset\
The training belongs to me\
150 epochs\
Turkish model |
l3cube-pune/hing-mbert-mixed-v2 | l3cube-pune | 2023-07-20T09:46:22Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-28T17:18:36Z | ---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingBERT-Mixed-v2
HingBERT-Mixed-v2 is a Hindi-English code-mixed BERT model trained on roman + devanagari text. It is a base MuRIL model fine-tuned on mixed script L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
``` |
Minggu/jennieblackpink | Minggu | 2023-07-20T09:36:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T09:35:28Z | ---
license: creativeml-openrail-m
---
|
Shubham09/falcon_20072023 | Shubham09 | 2023-07-20T09:36:16Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T09:35:35Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
au2a/whisper-base-zh-20230718-1 | au2a | 2023-07-20T09:25:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:-",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-18T12:22:30Z | ---
language:
- zh
license: apache-2.0
tags:
- whisper
- generated_from_trainer
datasets:
- '-'
model-index:
- name: whisper-base-zh-20230718-1 - au2a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-zh-20230718-1 - au2a
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on a Hakka audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4142
- Cer: 84.7926
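A minimal transcription sketch with the standard ASR pipeline; the audio path is a placeholder.
```python
# Hypothetical usage sketch for the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="au2a/whisper-base-zh-20230718-1")
print(asr("sample.wav"))  # placeholder path to a local audio file
```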
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0499 | 2.59 | 1000 | 0.3377 | 153.9019 |
| 0.0035 | 5.17 | 2000 | 0.3506 | 138.4528 |
| 0.0015 | 7.76 | 3000 | 0.3651 | 128.2541 |
| 0.001 | 10.35 | 4000 | 0.3754 | 105.1522 |
| 0.0005 | 12.94 | 5000 | 0.3841 | 90.0846 |
| 0.0004 | 15.52 | 6000 | 0.3925 | 92.5134 |
| 0.0002 | 18.11 | 7000 | 0.4011 | 86.3035 |
| 0.0002 | 20.7 | 8000 | 0.4070 | 80.0219 |
| 0.0001 | 23.29 | 9000 | 0.4118 | 82.5451 |
| 0.0001 | 25.87 | 10000 | 0.4142 | 84.7926 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tanmoy-in/test_model_v03 | tanmoy-in | 2023-07-20T09:24:25Z | 5 | 0 | peft | [
"peft",
"opt",
"region:us"
] | null | 2023-07-18T18:36:37Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
prateeksahu147/keyword-masked-model | prateeksahu147 | 2023-07-20T09:20:49Z | 59 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-20T06:44:56Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: keyword-masked-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# keyword-masked-model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6588
- Validation Loss: 0.5614
- Train Rouge1: 81.6702
- Train Rouge2: 69.0116
- Train Rougel: 81.6273
- Train Rougelsum: 81.5364
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----:|
| 0.9026 | 0.7256 | 78.4320 | 65.5502 | 78.2535 | 78.1327 | 0 |
| 0.8436 | 0.6875 | 79.2603 | 66.4389 | 79.1002 | 79.0620 | 1 |
| 0.7989 | 0.6597 | 79.8406 | 66.7444 | 79.5641 | 79.5095 | 2 |
| 0.7739 | 0.6403 | 81.0719 | 68.0576 | 80.8293 | 80.7287 | 3 |
| 0.7439 | 0.6246 | 81.0565 | 68.0129 | 80.7808 | 80.6909 | 4 |
| 0.7209 | 0.6135 | 81.1721 | 68.2028 | 80.9586 | 80.8343 | 5 |
| 0.6962 | 0.5982 | 81.6791 | 68.9723 | 81.5971 | 81.5262 | 6 |
| 0.6922 | 0.5822 | 81.7266 | 69.0548 | 81.6877 | 81.6085 | 7 |
| 0.6657 | 0.5696 | 82.0421 | 69.3520 | 81.9003 | 81.8580 | 8 |
| 0.6588 | 0.5614 | 81.6702 | 69.0116 | 81.6273 | 81.5364 | 9 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RogerB/kin-sentiC | RogerB | 2023-07-20T09:15:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-18T09:47:21Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: kin-sentiC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kin-sentiC
This model is a fine-tuned version of [RogerB/afro-xlmr-large-finetuned-kintweetsD](https://huggingface.co/RogerB/afro-xlmr-large-finetuned-kintweetsD) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8401
- F1: 0.7066
## Model description
The model was trained and evaluated on a Kinyarwanda sentiment analysis dataset of tweets created by [Muhammad et al](https://huggingface.co/datasets/shmuhammad/AfriSenti-twitter-sentiment/viewer/kin).
It classifies Kinyarwanda sentences into three categories: positive (0), neutral (1), and negative (2).
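A minimal inference sketch; the Kinyarwanda example sentence is only illustrative, and the returned label names depend on the model's label mapping.
```python
# Hypothetical usage sketch for the Kinyarwanda sentiment classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="RogerB/kin-sentiC")
print(classifier("Ndishimiye cyane uyu munsi"))  # illustrative example sentence
```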
## Intended uses & limitations
The model is specifically designed for classifying Kinyarwanda sentences, with a focus on Kinyarwanda tweets.
## Training and evaluation data
The training data was a combination of the [train set from Muhammad et al](https://huggingface.co/datasets/shmuhammad/AfriSenti-twitter-sentiment/viewer/kin/train) and the [val set from Muhammad et al](https://huggingface.co/datasets/shmuhammad/AfriSenti-twitter-sentiment/viewer/kin/), which served as the validation data during the training process.
For evaluating the model's performance, the test data was sourced from the [test set from Muhammad et al](https://huggingface.co/datasets/shmuhammad/AfriSenti-twitter-sentiment/viewer/kin/test)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.913 | 1.0 | 1013 | 0.6933 | 0.7054 |
| 0.737 | 2.0 | 2026 | 0.5614 | 0.7854 |
| 0.646 | 3.0 | 3039 | 0.5357 | 0.8039 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
phatjk/bloomz-lora-vi-QA-NLLB-viquad_v4 | phatjk | 2023-07-20T09:14:32Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T09:14:25Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
shivarama23/outputs | shivarama23 | 2023-07-20T09:05:26Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:EleutherAI/gpt-neox-20b",
"base_model:finetune:EleutherAI/gpt-neox-20b",
"license:apache-2.0",
"region:us"
] | null | 2023-07-20T08:59:54Z | ---
license: apache-2.0
base_model: EleutherAI/gpt-neox-20b
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
flaviaGarcia/text_model_1_epoch | flaviaGarcia | 2023-07-20T09:03:13Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-20T08:23:06Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: flaviaGarcia/text_model_1_epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# flaviaGarcia/text_model_1_epoch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2471
- Validation Loss: 0.1828
- Train Accuracy: 0.9298
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1562, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2471 | 0.1828 | 0.9298 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tgamstaetter/mult_tf | tgamstaetter | 2023-07-20T09:01:57Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-20T08:27:11Z | ---
license: mit
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: mult_tf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mult_tf
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5180
- Accuracy: 0.8364
- F1: 0.8358
- Precision: 0.8355
- Recall: 0.8364
- Roc Auc: 0.9896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 640
- eval_batch_size: 1280
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| No log | 1.0 | 357 | 0.5694 | 0.8249 | 0.8243 | 0.8245 | 0.8249 | 0.9875 |
| 0.5397 | 2.0 | 714 | 0.5324 | 0.8324 | 0.8312 | 0.8313 | 0.8324 | 0.9890 |
| 0.523 | 3.0 | 1071 | 0.5193 | 0.8354 | 0.8348 | 0.8346 | 0.8354 | 0.9895 |
| 0.523 | 4.0 | 1428 | 0.5180 | 0.8364 | 0.8358 | 0.8355 | 0.8364 | 0.9896 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shanover/disease_classifier_base | shanover | 2023-07-20T08:58:08Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"bert-base-uncased",
"disease",
"medical",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-20T05:32:16Z | ---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- bert-base-uncased
- disease
- medical
widget:
- text: "I am having itching, skin rash, and nodal skin eruptions"
example_title: "Fungal infection example"
- text: "I feel like vomiting, breathlessness, and sweating"
example_title: "Heart Attack example"
- text: "I am feeling fatigue, weight loss, restlessness and also lethargy."
example_title: "Diabetes example"
---
The objective is to develop a symptom-to-disease classification model for a natural language chatbot. This model takes input text such as "I am feeling vomiting, breathlessness, and sweating" and accurately identifies the associated disease (2 - 'Heart attack').
In essence, the chatbot's purpose is to analyze users' symptoms and provide relevant disease predictions in real-time conversation.
Labels:
- 0: Fungal infection
- 1: Diabetes
- 2: Heart attack
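A minimal inference sketch using the standard pipeline API; the symptom sentence mirrors the widget example above.
```python
# Hypothetical usage sketch for the symptom-to-disease classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="shanover/disease_classifier_base")
print(classifier("I feel like vomiting, breathlessness, and sweating"))
# expected to map to label 2 (Heart attack) according to the label list above
```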
More diseases will be added in the coming days. |
Leonardolin/insurance_multiple_label_my83-v2 | Leonardolin | 2023-07-20T08:55:36Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-18T01:29:13Z | !pip install transformers datasets
```
from transformers import pipeline

# Load the multi-label insurance text classifier
pipe = pipeline(model="Leonardolin/insurance_multiple_label_my83-v2", task='text-classification')

text = '一次給付型癌症險(your input)'  # replace with your own input text
output = pipe(text, top_k=10)

# Keep every label whose score exceeds 0.5
labels = []
for i in output:
    if i['score'] > 0.5:
        labels.append(i['label'])
        print(i['label'])
``` |
Oslaw/rl_course_vizdoom_health_gathering_supreme | Oslaw | 2023-07-20T08:42:41Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-20T07:42:15Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.98 +/- 4.39
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Oslaw/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ailabturkiye/Hazal_Kaya | ailabturkiye | 2023-07-20T08:35:59Z | 0 | 1 | null | [
"region:us"
] | null | 2023-07-17T09:20:19Z | [](discord.gg/ailab)


# Hazal Kaya - RVC V2 - Mangio Crepe - 300 Epoch
**This is the voice model of the actress Hazal Kaya, trained with RVC V2 for 300 epochs.**
**A 25-minute dataset was used.**
**The dataset contains interviews, singing, and voice samples from the TV series Adını Feriha Koydum.**
_The dataset and training were made by me._
__Sharing the model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the OpenRAIL license.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: jackswie
- Reddit: u/jackk_m
- YouTube: 𝖏𝖆𝖈𝖐𝖘𝖑𝖜𝖐 (https://www.youtube.com/channel/UCZSMJToEeMuqMFDL318v3Xw)
- TikTok: jackss.aep (https://www.tiktok.com/@jackss.aep)
- Instagram: jackslwk (https://www.instagram.com/jackslwk/)

[](discord.gg/ailab)
 |
4bit/Redmond-Puffin-13B-GPTQ | 4bit | 2023-07-20T08:16:41Z | 7 | 1 | transformers | [
"transformers",
"llama",
"text-generation",
"llama-2",
"sft",
"eng",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-07-20T08:06:14Z | ---
inference: false
language:
- eng
license: other
model_type: llama
tags:
- llama-2
- sft
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# NousResearch's Redmond Puffin 13B GPTQ
These files are GPTQ model files for [NousResearch's Redmond Puffin 13B](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
## Prompt template: Human-Gpt
```
### human:
### gpt:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Redmond-Puffin-13B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Redmond-Puffin-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Redmond-Puffin-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Redmond-Puffin-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Redmond-Puffin-13B-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''### human: {prompt}
### gpt:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: NousResearch's Redmond Puffin 13B

## **Redmond-Puffin-13b (Currently available as a Preview edition)**
**The first commercially available language model released by Nous Research!**
Redmond-Puffin-13B is one of the world's first Llama-2-based fine-tuned language models, leveraging a hand-curated set of 3K high-quality examples, many of which take full advantage of the 4096-token context length of Llama 2. This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha.
Special thank you to Redmond AI for sponsoring the compute.
Special thank you to Emozilla for assisting with training experimentations and many issues encountered during training.
Notable mentions for assisting in some of the training issues goes to: Caseus and Teknium.
## Model Training
Redmond-Puffin-13B is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
## Prompt Format
The model follows the Vicuna ShareGPT prompt format:
```
### human:
### gpt:
```
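For example, a single exchange would be formatted roughly like this (the question and reply are illustrative, not taken from the Puffin dataset):
```
### human: Summarize the plot of Romeo and Juliet in two sentences.
### gpt: Two young lovers from feuding families in Verona secretly marry. A series of misunderstandings ends in both of their deaths, which finally reconciles the two households.
```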
## Notable Features:
- The first Llama-2 based fine-tuned model released by Nous Research.
- Ability to recall information from up to late 2022 without internet access. (ChatGPT's cut-off date is in 2021.)
- Pretrained on 2 trillion tokens of text. (This is double the amount used by most open LLMs.)
- Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
- The first commercially available language model released by Nous Research.
## Current Limitations
Some token mismatch problems and formatting issues have been identified; these may well affect the current output quality.
We plan to have these solved in an updated Puffin model in the very near future, please stay tuned!
## Future Plans
This is a relatively early build amongst the grand plans for the future of Puffin!
Current limitations: some token mismatch problems and formatting issues have been identified, which may well affect the current output quality; we plan to have these solved in an updated Puffin model in the near future.
In the near future we plan on releasing an improved version of the model with the help of domain-specific expert volunteers, which will help eliminate any incorrect data from this curation and improve future versions.
## Benchmarks coming soon
benchmarks coming soon!
|
Zywald/GenerAd-AI | Zywald | 2023-07-20T08:06:34Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T08:06:29Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
DiTo97/binarization-segformer-b3 | DiTo97 | 2023-07-20T08:05:47Z | 215 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"segformer",
"generated_from_trainer",
"document-image-binarization",
"image-segmentation",
"arxiv:2105.05521",
"arxiv:1901.06081",
"license:openrail",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2023-05-13T16:27:36Z | ---
license: openrail
tags:
- generated_from_trainer
- document-image-binarization
- image-segmentation
model-index:
- name: binarization-segformer-b3
results: []
---
# binarization-segformer-b3
This model is a fine-tuned version of [nvidia/segformer-b3-1024-1024](https://huggingface.co/nvidia/segformer-b3-finetuned-cityscapes-1024-1024)
on the same ensemble of 13 datasets as the [SauvolaNet](https://arxiv.org/pdf/2105.05521.pdf) work publicly available
in their GitHub [repository](https://github.com/Leedeng/SauvolaNet#datasets).
It achieves the following results on the evaluation set on DIBCO metrics:
- loss: 0.0743
- DRD: 5.9548
- F-measure: 0.9840
- pseudo F-measure: 0.9740
- PSNR: 16.0119
with PSNR the peak signal-to-noise ratio and DRD the distance reciprocal distortion.
For more information on the above DIBCO metrics, see the 2017 introductory [paper](https://ieeexplore.ieee.org/document/8270159).
## Model description
This model is part of ongoing research on pure semantic segmentation models as a formulation of document image binarization (DIBCO).
This is in contrast to the recent trend of adapting classical binarization algorithms with neural networks,
such as [DeepOtsu](https://arxiv.org/abs/1901.06081) or [SauvolaNet](https://arxiv.org/pdf/2105.05521.pdf),
as extensions of Otsu's method and Sauvola's thresholding algorithm, respectively.
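A minimal inference sketch (this assumes the standard `transformers` SegFormer API and that a preprocessor config is available in this repository; otherwise load `SegformerImageProcessor` from the base checkpoint):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

model_id = "DiTo97/binarization-segformer-b3"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # illustrative input path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the original resolution and take the per-pixel argmax as the binary map
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
binary_map = upsampled.argmax(dim=1)[0].cpu().numpy()
```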
## Intended uses & limitations
TBC
## Training and evaluation data
TBC
## Training procedure
### Training hyperparameters
TBC
### Training results
| training loss | epoch | step | validation loss | DRD | F-measure | pseudo F-measure | PSNR |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:----------------:|:-------:|
| 0.6983 | 0.26 | 10 | 0.7079 | 199.5096 | 0.5945 | 0.5801 | 3.4552 |
| 0.6657 | 0.52 | 20 | 0.6755 | 149.2346 | 0.7006 | 0.6165 | 4.6752 |
| 0.6145 | 0.77 | 30 | 0.6433 | 109.7298 | 0.7831 | 0.6520 | 5.5489 |
| 0.5553 | 1.03 | 40 | 0.5443 | 53.7149 | 0.8952 | 0.8000 | 8.1736 |
| 0.4627 | 1.29 | 50 | 0.4896 | 32.7649 | 0.9321 | 0.8603 | 9.8706 |
| 0.3969 | 1.55 | 60 | 0.4327 | 21.5508 | 0.9526 | 0.8985 | 11.3400 |
| 0.3414 | 1.81 | 70 | 0.3002 | 11.0094 | 0.9732 | 0.9462 | 13.5901 |
| 0.2898 | 2.06 | 80 | 0.2839 | 10.1064 | 0.9748 | 0.9563 | 13.9796 |
| 0.2292 | 2.32 | 90 | 0.2427 | 9.4437 | 0.9761 | 0.9584 | 14.2161 |
| 0.2153 | 2.58 | 100 | 0.2095 | 8.8696 | 0.9771 | 0.9621 | 14.4319 |
| 0.1767 | 2.84 | 110 | 0.1916 | 8.6152 | 0.9776 | 0.9646 | 14.5528 |
| 0.1509 | 3.1 | 120 | 0.1704 | 8.0761 | 0.9791 | 0.9632 | 14.7961 |
| 0.1265 | 3.35 | 130 | 0.1561 | 8.5627 | 0.9784 | 0.9655 | 14.7400 |
| 0.132 | 3.61 | 140 | 0.1318 | 8.1849 | 0.9788 | 0.9670 | 14.8469 |
| 0.1115 | 3.87 | 150 | 0.1317 | 7.8438 | 0.9790 | 0.9657 | 14.9072 |
| 0.0983 | 4.13 | 160 | 0.1273 | 7.9405 | 0.9791 | 0.9673 | 14.9701 |
| 0.1001 | 4.39 | 170 | 0.1234 | 8.4132 | 0.9788 | 0.9691 | 14.8573 |
| 0.0862 | 4.65 | 180 | 0.1147 | 8.0838 | 0.9797 | 0.9678 | 15.0433 |
| 0.0713 | 4.9 | 190 | 0.1134 | 7.6027 | 0.9806 | 0.9687 | 15.2235 |
| 0.0905 | 5.16 | 200 | 0.1061 | 7.2973 | 0.9803 | 0.9699 | 15.1646 |
| 0.0902 | 5.42 | 210 | 0.1061 | 8.4049 | 0.9787 | 0.9699 | 14.8460 |
| 0.0759 | 5.68 | 220 | 0.1062 | 7.7147 | 0.9809 | 0.9695 | 15.2426 |
| 0.0638 | 5.94 | 230 | 0.1019 | 7.7449 | 0.9806 | 0.9695 | 15.2195 |
| 0.0852 | 6.19 | 240 | 0.0962 | 7.0221 | 0.9817 | 0.9693 | 15.4730 |
| 0.0677 | 6.45 | 250 | 0.0961 | 7.2520 | 0.9814 | 0.9710 | 15.3878 |
| 0.0668 | 6.71 | 260 | 0.0972 | 6.6658 | 0.9823 | 0.9689 | 15.6106 |
| 0.0701 | 6.97 | 270 | 0.0909 | 6.9454 | 0.9820 | 0.9713 | 15.5458 |
| 0.0567 | 7.23 | 280 | 0.0925 | 6.5498 | 0.9824 | 0.9718 | 15.5965 |
| 0.0624 | 7.48 | 290 | 0.0899 | 7.3125 | 0.9813 | 0.9717 | 15.3255 |
| 0.0649 | 7.74 | 300 | 0.0932 | 7.4915 | 0.9816 | 0.9684 | 15.5666 |
| 0.0524 | 8.0 | 310 | 0.0905 | 7.1666 | 0.9815 | 0.9711 | 15.4526 |
| 0.0693 | 8.26 | 320 | 0.0901 | 6.5627 | 0.9827 | 0.9704 | 15.7335 |
| 0.0528 | 8.52 | 330 | 0.0845 | 6.6690 | 0.9826 | 0.9734 | 15.5950 |
| 0.0632 | 8.77 | 340 | 0.0822 | 6.2661 | 0.9833 | 0.9723 | 15.8631 |
| 0.0522 | 9.03 | 350 | 0.0844 | 6.0073 | 0.9836 | 0.9715 | 15.9393 |
| 0.0568 | 9.29 | 360 | 0.0817 | 5.9460 | 0.9837 | 0.9721 | 15.9523 |
| 0.057 | 9.55 | 370 | 0.0900 | 7.9726 | 0.9812 | 0.9730 | 15.1229 |
| 0.052 | 9.81 | 380 | 0.0836 | 6.5444 | 0.9822 | 0.9712 | 15.6388 |
| 0.0568 | 10.06 | 390 | 0.0810 | 6.0359 | 0.9836 | 0.9714 | 15.9796 |
| 0.0481 | 10.32 | 400 | 0.0784 | 6.2110 | 0.9835 | 0.9724 | 15.9235 |
| 0.0513 | 10.58 | 410 | 0.0803 | 6.0990 | 0.9835 | 0.9715 | 15.9502 |
| 0.0595 | 10.84 | 420 | 0.0798 | 6.0829 | 0.9835 | 0.9720 | 15.9052 |
| 0.047 | 11.1 | 430 | 0.0779 | 5.8847 | 0.9838 | 0.9725 | 16.0043 |
| 0.0406 | 11.35 | 440 | 0.0802 | 5.7944 | 0.9838 | 0.9713 | 16.0620 |
| 0.0493 | 11.61 | 450 | 0.0781 | 6.0947 | 0.9836 | 0.9731 | 15.9033 |
| 0.064 | 11.87 | 460 | 0.0769 | 6.1257 | 0.9837 | 0.9736 | 15.9080 |
| 0.0622 | 12.13 | 470 | 0.0765 | 6.2964 | 0.9835 | 0.9739 | 15.8188 |
| 0.0457 | 12.39 | 480 | 0.0773 | 5.9826 | 0.9838 | 0.9728 | 16.0119 |
| 0.0447 | 12.65 | 490 | 0.0761 | 5.7977 | 0.9841 | 0.9728 | 16.0900 |
| 0.0515 | 12.9 | 500 | 0.0750 | 5.8569 | 0.9840 | 0.9729 | 16.0633 |
| 0.0357 | 13.16 | 510 | 0.0796 | 5.7990 | 0.9837 | 0.9713 | 16.0818 |
| 0.0503 | 13.42 | 520 | 0.0749 | 5.8323 | 0.9841 | 0.9736 | 16.0510 |
| 0.0508 | 13.68 | 530 | 0.0746 | 6.0361 | 0.9839 | 0.9735 | 15.9709 |
| 0.0533 | 13.94 | 540 | 0.0768 | 6.1596 | 0.9836 | 0.9740 | 15.9193 |
| 0.0503 | 14.19 | 550 | 0.0739 | 5.5900 | 0.9843 | 0.9723 | 16.1883 |
| 0.0515 | 14.45 | 560 | 0.0740 | 5.4660 | 0.9845 | 0.9727 | 16.2745 |
| 0.0502 | 14.71 | 570 | 0.0740 | 5.5895 | 0.9844 | 0.9736 | 16.2054 |
| 0.0401 | 14.97 | 580 | 0.0741 | 5.9694 | 0.9840 | 0.9747 | 15.9603 |
| 0.0495 | 15.23 | 590 | 0.0745 | 5.9136 | 0.9841 | 0.9740 | 16.0458 |
| 0.0413 | 15.48 | 600 | 0.0743 | 5.9548 | 0.9840 | 0.9740 | 16.0119 |
### Framework versions
- transformers 4.31.0
- torch 2.0.0
- datasets 2.13.1
- tokenizers 0.13.3
|
lianlian123/ppo-LunarLander-v2 | lianlian123 | 2023-07-20T07:50:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-20T07:50:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.36 +/- 13.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's files for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check this repo's files for the exact checkpoint name.
checkpoint = load_from_hub(repo_id="lianlian123/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
seonglae/llama-2-7b-chat-hf-gptq | seonglae | 2023-07-20T07:47:51Z | 12 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"llama-2",
"llama2",
"gptq",
"auto-gptq",
"7b",
"4bit",
"quantization",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-07-19T07:16:52Z | ---
inference: false
license: other
tags:
- llama-2
- llama2
- gptq
- auto-gptq
- 7b
- llama
- 4bit
- quantization
---
# Get Started
This model should be loaded with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), so you need the `auto-gptq` package.
- `no-act-order` model
- 4bit model quantization
```py
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM

model_id = 'seonglae/llama-2-7b-chat-hf-gptq'
# Assumption: set this to the quantized weights filename in the repo (without extension).
model_basename = 'gptq_model-4bit-128g'
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename=model_basename,
    trust_remote_code=True,
    device='cuda:0',
    use_triton=False,
    use_safetensors=True,
)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    temperature=0.5,
    top_p=0.95,
    max_new_tokens=100,
    repetition_penalty=1.15,
)
prompt = "USER: Are you AI?\nASSISTANT:"
pipe(prompt)
``` |
J3/speecht5_finetuned_voxpopuli_it | J3 | 2023-07-20T07:46:18Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2023-07-19T10:00:22Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_it
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4968
## Model description
More information needed
## Intended uses & limitations
More information needed
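A minimal text-to-speech sketch (this assumes the processor was saved alongside the model, otherwise load it from `microsoft/speecht5_tts`; the speaker x-vector and the Italian sentence are illustrative choices):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "J3/speecht5_finetuned_voxpopuli_it"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector works as a speaker embedding; index 7306 matches the SpeechT5 docs example.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Buongiorno, come stai?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```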
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6707 | 1.0 | 108 | 0.5946 |
| 0.6625 | 2.0 | 217 | 0.6029 |
| 0.708 | 3.0 | 325 | 0.6118 |
| 0.6588 | 4.0 | 434 | 0.7109 |
| 0.6614 | 5.0 | 542 | 0.5799 |
| 0.6375 | 6.0 | 651 | 0.5714 |
| 0.619 | 7.0 | 759 | 0.5699 |
| 0.5806 | 8.0 | 868 | 0.5538 |
| 0.6024 | 9.0 | 976 | 0.5856 |
| 0.5728 | 10.0 | 1085 | 0.5446 |
| 0.5624 | 11.0 | 1193 | 0.5508 |
| 0.5711 | 12.0 | 1302 | 0.5376 |
| 0.5438 | 13.0 | 1410 | 0.5300 |
| 0.5308 | 14.0 | 1519 | 0.5206 |
| 0.5536 | 15.0 | 1627 | 0.5359 |
| 0.5285 | 16.0 | 1736 | 0.5264 |
| 0.525 | 17.0 | 1844 | 0.5108 |
| 0.4961 | 18.0 | 1953 | 0.5116 |
| 0.5111 | 19.0 | 2061 | 0.5042 |
| 0.4869 | 20.0 | 2170 | 0.5050 |
| 0.4864 | 21.0 | 2278 | 0.4994 |
| 0.4794 | 22.0 | 2387 | 0.5039 |
| 0.4787 | 23.0 | 2495 | 0.4975 |
| 0.4692 | 24.0 | 2604 | 0.4961 |
| 0.4656 | 24.88 | 2700 | 0.4968 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3 |
loganamcnichols/autotrain-deberta_alpha3e4_epoch4_replic | loganamcnichols | 2023-07-20T07:44:54Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"unk",
"dataset:loganamcnichols/autotrain-data-deberta_alpha3e4_epoch4_replic",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-20T07:43:50Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- loganamcnichols/autotrain-data-deberta_alpha3e4_epoch4_replic
co2_eq_emissions:
emissions: 0.6483603321935798
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
- CO2 Emissions (in grams): 0.6484
## Validation Metrics
eval_loss: 2.443734884262085
eval_mse: 2.443734884262085
eval_runtime: 0.5298
eval_samples_per_second: 90.593
eval_steps_per_second: 1.887
epoch: 4.0
|
EhsanElahi/pokemon-lora | EhsanElahi | 2023-07-20T07:44:45Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-20T06:43:23Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - EhsanElahi/pokemon-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
VFiona/opus-mt-it-en-finetuned_5000-it-to-en | VFiona | 2023-07-20T07:42:25Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-19T22:30:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-it-en-finetuned_5000-it-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-it-en-finetuned_5000-it-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-it-en](https://huggingface.co/Helsinki-NLP/opus-mt-it-en) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 282 | 0.5054 | 71.2415 | 22.26 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
The13thDrifter/Cayde-6 | The13thDrifter | 2023-07-20T07:38:23Z | 0 | 0 | null | [
"en",
"license:cc-by-3.0",
"region:us"
] | null | 2023-07-19T23:56:31Z | ---
license: cc-by-3.0
language:
- en
--- |
yancongwen/chatglm2-6b-pt-16-1e-2-20230720-2 | yancongwen | 2023-07-20T07:37:45Z | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | 2023-07-20T07:35:33Z | # ChatGLM2-6B 微调模型
参考:[ChatGLM2-6B-PT](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning)
## 参数
```sh
PRE_SEQ_LEN=16
LR=1e-2
NUM_GPUS=1
torchrun --standalone --nnodes=1 --nproc-per-node=$NUM_GPUS main.py \
--do_train \
--train_file train_data/train_100k.json \
--validation_file train_data/dev_1k.json \
--preprocessing_num_workers 10 \
--prompt_column question \
--response_column answer \
--overwrite_cache \
--model_name_or_path THUDM/chatglm2-6b \
--output_dir output/chatglm2-6b-pt-$PRE_SEQ_LEN-$LR-20230720-2 \
--overwrite_output_dir \
--max_source_length 256 \
--max_target_length 128 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--predict_with_generate \
--max_steps 1000 \
--logging_steps 10 \
--save_steps 1000 \
--learning_rate $LR \
--pre_seq_len $PRE_SEQ_LEN \
--quantization_bit 4
```
## train metrics
```
epoch = 0.2
train_loss = 0.1803
train_runtime = 1:44:48.92
train_samples = 78577
train_samples_per_second = 2.544
train_steps_per_second = 0.159
```
---
license: unlicense
--- |
albagon/ppo-Huggy | albagon | 2023-07-20T07:34:16Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-20T07:34:12Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: albagon/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Trong-Nghia/xlnet-large-cased-detect-dep-v4 | Trong-Nghia | 2023-07-20T07:27:36Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-large-cased",
"base_model:finetune:xlnet/xlnet-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-19T16:01:35Z | ---
license: mit
base_model: xlnet-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlnet-large-cased-detect-dep-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-large-cased-detect-dep-v4
This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5693
- Accuracy: 0.733
- F1: 0.8089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6433 | 1.0 | 751 | 0.5590 | 0.718 | 0.8082 |
| 0.603 | 2.0 | 1502 | 0.5566 | 0.746 | 0.8204 |
| 0.5791 | 3.0 | 2253 | 0.5693 | 0.733 | 0.8089 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
helojo/my_awesome_eli5_clm-model | helojo | 2023-07-20T06:51:42Z | 218 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-20T04:09:53Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9025 | 1.0 | 1134 | 3.7812 |
| 3.8144 | 2.0 | 2268 | 3.7806 |
| 3.8042 | 3.0 | 3402 | 3.7848 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nishanthk10/falcon-7b-instruct-ft-adapters | nishanthk10 | 2023-07-20T06:50:08Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-19T10:11:05Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
nolanaatama/hnthyg | nolanaatama | 2023-07-20T06:49:30Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-20T06:41:11Z | ---
license: creativeml-openrail-m
---
|
nebulae7/one | nebulae7 | 2023-07-20T06:46:29Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T06:06:34Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
nolanaatama/htrgtbcchthrckhjck | nolanaatama | 2023-07-20T06:45:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-19T09:42:42Z | ---
license: creativeml-openrail-m
---
|
Roy029/sno_extend_2500 | Roy029 | 2023-07-20T06:00:39Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-19T10:54:10Z | PACLIC
Extended model and tokenizer |
tuan2930/AnhTuan | tuan2930 | 2023-07-20T05:50:51Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-07-20T05:50:51Z | ---
license: bigscience-bloom-rail-1.0
---
|
gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz | gokuls | 2023-07-20T05:36:43Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-17T17:20:50Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_v1_complete_training_wt_init_48_mini_emb_comp_frz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_v1_complete_training_wt_init_48_mini_emb_comp_frz
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7952
- Accuracy: 0.1570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-------:|:---------------:|:--------:|
| 6.2393 | 0.25 | 30000 | 6.2308 | 0.1422 |
| 6.1905 | 0.49 | 60000 | 6.1865 | 0.1446 |
| 6.1603 | 0.74 | 90000 | 6.1535 | 0.1467 |
| 6.1282 | 0.98 | 120000 | 6.1308 | 0.1473 |
| 6.1155 | 1.23 | 150000 | 6.1108 | 0.1485 |
| 6.1032 | 1.47 | 180000 | 6.0956 | 0.1491 |
| 6.0866 | 1.72 | 210000 | 6.0824 | 0.1495 |
| 6.074 | 1.97 | 240000 | 6.0709 | 0.1497 |
| 6.0586 | 2.21 | 270000 | 6.0606 | 0.1500 |
| 6.0451 | 2.46 | 300000 | 6.0479 | 0.1506 |
| 6.0401 | 2.7 | 330000 | 6.0385 | 0.1507 |
| 6.027 | 2.95 | 360000 | 6.0274 | 0.1512 |
| 6.0198 | 3.2 | 390000 | 6.0148 | 0.1512 |
| 6.0023 | 3.44 | 420000 | 5.9970 | 0.1514 |
| 5.9882 | 3.69 | 450000 | 5.9782 | 0.1522 |
| 5.9756 | 3.93 | 480000 | 5.9632 | 0.1521 |
| 5.9587 | 4.18 | 510000 | 5.9471 | 0.1525 |
| 5.9449 | 4.42 | 540000 | 5.9315 | 0.1527 |
| 5.9212 | 4.67 | 570000 | 5.9157 | 0.1535 |
| 5.9201 | 4.92 | 600000 | 5.9062 | 0.1536 |
| 5.9125 | 5.16 | 630000 | 5.8994 | 0.1539 |
| 5.8982 | 5.41 | 660000 | 5.8930 | 0.1541 |
| 5.8933 | 5.65 | 690000 | 5.8847 | 0.1543 |
| 5.8844 | 5.9 | 720000 | 5.8792 | 0.1542 |
| 5.8848 | 6.14 | 750000 | 5.8728 | 0.1543 |
| 5.8787 | 6.39 | 780000 | 5.8678 | 0.1547 |
| 5.8748 | 6.64 | 810000 | 5.8629 | 0.1546 |
| 5.8665 | 6.88 | 840000 | 5.8576 | 0.1549 |
| 5.8637 | 7.13 | 870000 | 5.8513 | 0.1552 |
| 5.8553 | 7.37 | 900000 | 5.8465 | 0.1555 |
| 5.8539 | 7.62 | 930000 | 5.8423 | 0.1554 |
| 5.8479 | 7.87 | 960000 | 5.8378 | 0.1556 |
| 5.8446 | 8.11 | 990000 | 5.8329 | 0.1557 |
| 5.8411 | 8.36 | 1020000 | 5.8283 | 0.1559 |
| 5.8316 | 8.6 | 1050000 | 5.8240 | 0.1561 |
| 5.8254 | 8.85 | 1080000 | 5.8219 | 0.1559 |
| 5.8268 | 9.09 | 1110000 | 5.8158 | 0.1560 |
| 5.8257 | 9.34 | 1140000 | 5.8125 | 0.1562 |
| 5.8205 | 9.59 | 1170000 | 5.8076 | 0.1565 |
| 5.811 | 9.83 | 1200000 | 5.8025 | 0.1566 |
| 5.8123 | 10.08 | 1230000 | 5.7997 | 0.1567 |
| 5.8125 | 10.32 | 1260000 | 5.7952 | 0.1570 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Pattriarch/test | Pattriarch | 2023-07-20T05:19:43Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-20T05:16:51Z | # This is a test repository |
w11wo/sundanese-roberta-base | w11wo | 2023-07-20T05:16:57Z | 84 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"sundanese-roberta-base",
"su",
"dataset:mc4",
"dataset:cc100",
"dataset:oscar",
"dataset:wikipedia",
"arxiv:1907.11692",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: su
tags:
- sundanese-roberta-base
license: mit
datasets:
- mc4
- cc100
- oscar
- wikipedia
widget:
- text: "Budi nuju <mask> di sakola."
---
## Sundanese RoBERTa Base
Sundanese RoBERTa Base is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. It was trained on four datasets: [OSCAR](https://hf.co/datasets/oscar)'s `unshuffled_deduplicated_su` subset, the Sundanese [mC4](https://hf.co/datasets/mc4) subset, the Sundanese [CC100](https://hf.co/datasets/cc100) subset, and Sundanese [Wikipedia](https://su.wikipedia.org/).
10% of the dataset is kept for evaluation purposes. The model was trained from scratch and achieved an evaluation loss of 1.952 and an evaluation accuracy of 63.98%.
This model was trained using HuggingFace's Flax framework. All necessary scripts used for training could be found in the [Files and versions](https://hf.co/w11wo/sundanese-roberta-base/tree/main) tab, as well as the [Training metrics](https://hf.co/w11wo/sundanese-roberta-base/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------ | ------- | ------- | ------------------------------------- |
| `sundanese-roberta-base` | 124M | RoBERTa | OSCAR, mC4, CC100, Wikipedia (758 MB) |
## Evaluation Results
The model was trained for 50 epochs and the following is the final result once the training ended.
| train loss | valid loss | valid accuracy | total time |
| ---------- | ---------- | -------------- | ---------- |
| 1.965 | 1.952 | 0.6398 | 6:24:51 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/sundanese-roberta-base"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Budi nuju <mask> di sakola.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/sundanese-roberta-base"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Budi nuju diajar di sakola."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases from all four datasets, which may be carried over into the results of this model.
## Author
Sundanese RoBERTa Base was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/).
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
emaeon/shuffle-lora-large-healthcare-model-epoch-1 | emaeon | 2023-07-20T04:50:40Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-19T17:42:16Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
emaeon/shuffle-lora-large-healthcare-model-epoch-0 | emaeon | 2023-07-20T04:46:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-19T17:37:55Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
sd-concepts-library/taiwanbeer | sd-concepts-library | 2023-07-20T04:32:25Z | 0 | 0 | null | [
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] | null | 2023-07-20T04:32:16Z | ---
license: mit
base_model: runwayml/stable-diffusion-v1-5
---
### taiwanbeer on Stable Diffusion
This is the `<taiwan_beer_object>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
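Alternatively, a recent `diffusers` release can load the embedding directly (a sketch; the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Registers the learned <taiwan_beer_object> token with the tokenizer and text encoder
pipe.load_textual_inversion("sd-concepts-library/taiwanbeer")

image = pipe("a photo of <taiwan_beer_object> on a wooden table").images[0]
image.save("taiwan_beer.png")
```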
Here is the new concept you will be able to use as an `object`:









|
rifkiaputri/indobert-base-id-finetune-idk-mrc | rifkiaputri | 2023-07-20T04:23:18Z | 138 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"machine-reading-comprehension",
"extractive-qa",
"id",
"dataset:rifkiaputri/idk-mrc",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-14T04:47:05Z |
---
language:
- id
tags:
- machine-reading-comprehension
- question-answering
- extractive-qa
datasets:
- rifkiaputri/idk-mrc
---
# IndoBERT for Indonesian MRC (uncased)
[IndoBERT](https://huggingface.co/indobenchmark/indobert-base-p2) model fine-tuned on [IDK-MRC dataset](https://huggingface.co/datasets/rifkiaputri/idk-mrc) for answering extractive questions in Indonesian. Please refer to [this paper](https://aclanthology.org/2022.emnlp-main.465/) for more details on the model.
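A minimal usage sketch with the `question-answering` pipeline (the Indonesian question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="rifkiaputri/indobert-base-id-finetune-idk-mrc",
)

result = qa(
    question="Apa ibu kota Indonesia?",
    context="Jakarta adalah ibu kota Indonesia dan kota terbesar di negara itu.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```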
## Citation Info
```bibtex
@inproceedings{putri-oh-2022-idk,
title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
author = "Putri, Rifki Afina and
Oh, Alice",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.465",
pages = "6918--6933",
}
``` |
rifkiaputri/mbert-base-id-finetune-idk-mrc | rifkiaputri | 2023-07-20T04:21:19Z | 113 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"machine-reading-comprehension",
"extractive-qa",
"id",
"dataset:rifkiaputri/idk-mrc",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-14T05:02:46Z | ---
language:
- id
tags:
- machine-reading-comprehension
- question-answering
- extractive-qa
datasets:
- rifkiaputri/idk-mrc
---
# m-BERT Indonesian MRC (cased)
[m-BERT](https://huggingface.co/bert-base-multilingual-cased) model fine-tuned on [IDK-MRC dataset](https://huggingface.co/datasets/rifkiaputri/idk-mrc) for answering extractive questions in Indonesian. Please refer to [this paper](https://aclanthology.org/2022.emnlp-main.465/) for more details on the model.
## Citation Info
```bibtex
@inproceedings{putri-oh-2022-idk,
title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
author = "Putri, Rifki Afina and
Oh, Alice",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.465",
pages = "6918--6933",
}
``` |
eschorn/3_loa | eschorn | 2023-07-20T03:54:40Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:billsum",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"region:us"
] | null | 2023-07-19T20:46:54Z | ---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: 3_loa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3_loa
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4825
- Rouge1: 0.201
- Rouge2: 0.1132
- Rougel: 0.1753
- Rougelsum: 0.1755
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.1079 | 1.0 | 989 | 1.6673 | 0.2028 | 0.1092 | 0.1748 | 0.1751 | 19.0 |
| 1.8481 | 2.0 | 1978 | 1.6150 | 0.1979 | 0.1061 | 0.1715 | 0.1717 | 19.0 |
| 1.7889 | 3.0 | 2967 | 1.5833 | 0.1994 | 0.11 | 0.1727 | 0.1727 | 19.0 |
| 1.7319 | 4.0 | 3956 | 1.5584 | 0.1978 | 0.1084 | 0.1718 | 0.1718 | 19.0 |
| 1.7279 | 5.0 | 4945 | 1.5440 | 0.2016 | 0.1106 | 0.1755 | 0.1756 | 19.0 |
| 1.7386 | 6.0 | 5934 | 1.5326 | 0.1991 | 0.1086 | 0.1734 | 0.1736 | 19.0 |
| 1.6972 | 7.0 | 6923 | 1.5251 | 0.2013 | 0.1122 | 0.1759 | 0.176 | 19.0 |
| 1.6732 | 8.0 | 7912 | 1.5145 | 0.2024 | 0.1123 | 0.1766 | 0.1766 | 19.0 |
| 1.6597 | 9.0 | 8901 | 1.5079 | 0.2019 | 0.1125 | 0.1751 | 0.1753 | 19.0 |
| 1.6151 | 10.0 | 9890 | 1.5045 | 0.201 | 0.1123 | 0.1758 | 0.1761 | 19.0 |
| 1.6381 | 11.0 | 10879 | 1.4997 | 0.2009 | 0.1116 | 0.1755 | 0.1756 | 19.0 |
| 1.6148 | 12.0 | 11868 | 1.4974 | 0.2018 | 0.1133 | 0.1763 | 0.1765 | 19.0 |
| 1.6196 | 13.0 | 12857 | 1.4940 | 0.2014 | 0.1129 | 0.1756 | 0.1756 | 19.0 |
| 1.6137 | 14.0 | 13846 | 1.4914 | 0.2025 | 0.1136 | 0.1766 | 0.1768 | 19.0 |
| 1.6313 | 15.0 | 14835 | 1.4873 | 0.2032 | 0.114 | 0.1769 | 0.1771 | 19.0 |
| 1.6098 | 16.0 | 15824 | 1.4847 | 0.2012 | 0.1133 | 0.175 | 0.1754 | 19.0 |
| 1.6061 | 17.0 | 16813 | 1.4845 | 0.2019 | 0.1138 | 0.1752 | 0.1755 | 19.0 |
| 1.5918 | 18.0 | 17802 | 1.4833 | 0.2011 | 0.1129 | 0.1747 | 0.175 | 19.0 |
| 1.5842 | 19.0 | 18791 | 1.4824 | 0.2013 | 0.1133 | 0.1753 | 0.1755 | 19.0 |
| 1.5964 | 20.0 | 19780 | 1.4825 | 0.201 | 0.1132 | 0.1753 | 0.1755 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1.post200
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Yntec/DucHaitenAnime768 | Yntec | 2023-07-20T03:44:02Z | 250 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"DucHaiten",
"Anime",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-19T23:09:45Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- DucHaiten
- Anime
---
# DucHaiten Anime
A 768-resolution version of the fp16 no-EMA variant of this model, with the Waifu 1.4 VAE baked in for the inference API.
If you like his content, support him at: https://linktr.ee/Duc_Haiten
https://www.patreon.com/duchaitenreal
Original page:
https://civitai.com/models/6634 |
YarramsettiNaresh/q-Taxi-v3 | YarramsettiNaresh | 2023-07-20T03:26:35Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-20T02:11:55Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` here is the helper function defined in the Hugging Face Deep RL course notebook.
import gymnasium as gym  # or `import gym`, depending on your setup

model = load_from_hub(repo_id="YarramsettiNaresh/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jordyvl/171-tiny_tobacco3482_kd_NKD_t1.0_g1.5 | jordyvl | 2023-07-20T03:02:58Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-19T11:02:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 171-tiny_tobacco3482_kd_NKD_t1.0_g1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 171-tiny_tobacco3482_kd_NKD_t1.0_g1.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3051
- Accuracy: 0.815
- Brier Loss: 0.3074
- Nll: 1.7785
- F1 Micro: 0.815
- F1 Macro: 0.8049
- Ece: 0.1516
- Aurc: 0.0489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 5.0451 | 0.22 | 0.8885 | 8.4566 | 0.22 | 0.1247 | 0.2851 | 0.7928 |
| No log | 2.0 | 26 | 4.5055 | 0.385 | 0.7765 | 3.9655 | 0.3850 | 0.3066 | 0.3049 | 0.4178 |
| No log | 3.0 | 39 | 4.3096 | 0.52 | 0.6691 | 3.8146 | 0.52 | 0.3950 | 0.3177 | 0.2860 |
| No log | 4.0 | 52 | 4.1755 | 0.575 | 0.5907 | 2.9444 | 0.575 | 0.4546 | 0.2729 | 0.2029 |
| No log | 5.0 | 65 | 4.0437 | 0.675 | 0.5104 | 2.4241 | 0.675 | 0.5995 | 0.2991 | 0.1354 |
| No log | 6.0 | 78 | 4.0642 | 0.69 | 0.4602 | 2.3471 | 0.69 | 0.5925 | 0.2798 | 0.1256 |
| No log | 7.0 | 91 | 4.0104 | 0.695 | 0.4319 | 2.2902 | 0.695 | 0.6109 | 0.2430 | 0.1101 |
| No log | 8.0 | 104 | 4.1702 | 0.7 | 0.4296 | 2.5778 | 0.7 | 0.6065 | 0.2231 | 0.1201 |
| No log | 9.0 | 117 | 4.2785 | 0.695 | 0.4433 | 2.7331 | 0.695 | 0.6269 | 0.2296 | 0.1283 |
| No log | 10.0 | 130 | 3.9853 | 0.725 | 0.3705 | 2.0880 | 0.7250 | 0.6477 | 0.1971 | 0.0874 |
| No log | 11.0 | 143 | 3.9595 | 0.725 | 0.3506 | 2.1144 | 0.7250 | 0.6431 | 0.1650 | 0.0750 |
| No log | 12.0 | 156 | 3.8678 | 0.735 | 0.3504 | 2.0683 | 0.735 | 0.6839 | 0.2047 | 0.0764 |
| No log | 13.0 | 169 | 3.9641 | 0.745 | 0.3520 | 2.0788 | 0.745 | 0.6754 | 0.1899 | 0.0837 |
| No log | 14.0 | 182 | 4.0188 | 0.725 | 0.3639 | 2.3771 | 0.7250 | 0.6643 | 0.1740 | 0.0893 |
| No log | 15.0 | 195 | 3.8558 | 0.765 | 0.3342 | 1.4620 | 0.765 | 0.7097 | 0.1866 | 0.0696 |
| No log | 16.0 | 208 | 3.9103 | 0.79 | 0.3416 | 1.7139 | 0.79 | 0.7662 | 0.2043 | 0.0770 |
| No log | 17.0 | 221 | 4.0320 | 0.795 | 0.3548 | 1.8525 | 0.795 | 0.7690 | 0.1901 | 0.0924 |
| No log | 18.0 | 234 | 3.8974 | 0.79 | 0.3264 | 1.8646 | 0.79 | 0.7582 | 0.1656 | 0.0739 |
| No log | 19.0 | 247 | 3.8235 | 0.815 | 0.3074 | 1.4771 | 0.815 | 0.8185 | 0.1825 | 0.0617 |
| No log | 20.0 | 260 | 3.8918 | 0.805 | 0.3150 | 1.6824 | 0.805 | 0.7893 | 0.1859 | 0.0631 |
| No log | 21.0 | 273 | 3.8919 | 0.785 | 0.3161 | 1.7951 | 0.785 | 0.7725 | 0.1450 | 0.0701 |
| No log | 22.0 | 286 | 3.8626 | 0.795 | 0.3121 | 1.6707 | 0.795 | 0.7832 | 0.1570 | 0.0684 |
| No log | 23.0 | 299 | 3.8132 | 0.825 | 0.2906 | 1.4511 | 0.825 | 0.8097 | 0.1552 | 0.0564 |
| No log | 24.0 | 312 | 3.8680 | 0.81 | 0.3048 | 1.9348 | 0.81 | 0.8027 | 0.1572 | 0.0611 |
| No log | 25.0 | 325 | 3.8305 | 0.81 | 0.2954 | 1.5734 | 0.81 | 0.7999 | 0.1645 | 0.0556 |
| No log | 26.0 | 338 | 3.8050 | 0.81 | 0.2965 | 1.7904 | 0.81 | 0.8013 | 0.1495 | 0.0546 |
| No log | 27.0 | 351 | 3.9524 | 0.79 | 0.3212 | 2.0459 | 0.79 | 0.7846 | 0.1643 | 0.0669 |
| No log | 28.0 | 364 | 3.9299 | 0.81 | 0.3076 | 1.7819 | 0.81 | 0.7967 | 0.1393 | 0.0601 |
| No log | 29.0 | 377 | 3.9315 | 0.805 | 0.3158 | 2.0697 | 0.805 | 0.8046 | 0.1618 | 0.0663 |
| No log | 30.0 | 390 | 3.8141 | 0.825 | 0.2853 | 1.9079 | 0.825 | 0.8150 | 0.1487 | 0.0528 |
| No log | 31.0 | 403 | 3.8682 | 0.815 | 0.2932 | 1.9092 | 0.815 | 0.8030 | 0.1448 | 0.0585 |
| No log | 32.0 | 416 | 3.8275 | 0.82 | 0.2823 | 1.6793 | 0.82 | 0.8043 | 0.1459 | 0.0508 |
| No log | 33.0 | 429 | 3.8782 | 0.82 | 0.2895 | 1.6565 | 0.82 | 0.8077 | 0.1465 | 0.0542 |
| No log | 34.0 | 442 | 3.8433 | 0.825 | 0.2891 | 1.6481 | 0.825 | 0.8157 | 0.1467 | 0.0525 |
| No log | 35.0 | 455 | 3.8403 | 0.82 | 0.2891 | 1.5960 | 0.82 | 0.8090 | 0.1398 | 0.0497 |
| No log | 36.0 | 468 | 3.8627 | 0.81 | 0.2848 | 1.6935 | 0.81 | 0.8015 | 0.1557 | 0.0471 |
| No log | 37.0 | 481 | 3.8992 | 0.81 | 0.2937 | 1.8237 | 0.81 | 0.7991 | 0.1511 | 0.0515 |
| No log | 38.0 | 494 | 3.9662 | 0.82 | 0.2978 | 1.8392 | 0.82 | 0.8143 | 0.1503 | 0.0527 |
| 3.5354 | 39.0 | 507 | 3.9440 | 0.825 | 0.2899 | 1.7818 | 0.825 | 0.8159 | 0.1454 | 0.0540 |
| 3.5354 | 40.0 | 520 | 3.9479 | 0.81 | 0.2959 | 1.7465 | 0.81 | 0.7986 | 0.1504 | 0.0501 |
| 3.5354 | 41.0 | 533 | 3.9760 | 0.815 | 0.2964 | 1.7821 | 0.815 | 0.8049 | 0.1519 | 0.0522 |
| 3.5354 | 42.0 | 546 | 3.9696 | 0.82 | 0.2906 | 1.7671 | 0.82 | 0.8127 | 0.1468 | 0.0503 |
| 3.5354 | 43.0 | 559 | 4.0107 | 0.81 | 0.2994 | 1.8207 | 0.81 | 0.7986 | 0.1474 | 0.0517 |
| 3.5354 | 44.0 | 572 | 3.9970 | 0.815 | 0.2913 | 1.7706 | 0.815 | 0.8049 | 0.1465 | 0.0504 |
| 3.5354 | 45.0 | 585 | 3.9890 | 0.815 | 0.2886 | 1.6384 | 0.815 | 0.8049 | 0.1516 | 0.0495 |
| 3.5354 | 46.0 | 598 | 4.0585 | 0.82 | 0.3006 | 1.7773 | 0.82 | 0.8127 | 0.1522 | 0.0518 |
| 3.5354 | 47.0 | 611 | 4.0448 | 0.825 | 0.2925 | 1.8226 | 0.825 | 0.8109 | 0.1540 | 0.0505 |
| 3.5354 | 48.0 | 624 | 4.0918 | 0.815 | 0.3016 | 1.8403 | 0.815 | 0.8049 | 0.1492 | 0.0512 |
| 3.5354 | 49.0 | 637 | 4.0677 | 0.82 | 0.2971 | 1.8256 | 0.82 | 0.8127 | 0.1396 | 0.0493 |
| 3.5354 | 50.0 | 650 | 4.0831 | 0.815 | 0.2986 | 1.8232 | 0.815 | 0.8049 | 0.1479 | 0.0513 |
| 3.5354 | 51.0 | 663 | 4.0846 | 0.815 | 0.2994 | 1.8268 | 0.815 | 0.8049 | 0.1525 | 0.0496 |
| 3.5354 | 52.0 | 676 | 4.0828 | 0.82 | 0.2978 | 1.7538 | 0.82 | 0.8127 | 0.1425 | 0.0486 |
| 3.5354 | 53.0 | 689 | 4.0890 | 0.815 | 0.3004 | 1.7552 | 0.815 | 0.8049 | 0.1491 | 0.0485 |
| 3.5354 | 54.0 | 702 | 4.1299 | 0.815 | 0.3029 | 1.8902 | 0.815 | 0.8049 | 0.1614 | 0.0506 |
| 3.5354 | 55.0 | 715 | 4.1200 | 0.815 | 0.3016 | 1.8279 | 0.815 | 0.8049 | 0.1510 | 0.0499 |
| 3.5354 | 56.0 | 728 | 4.1196 | 0.815 | 0.3008 | 1.8883 | 0.815 | 0.8049 | 0.1503 | 0.0503 |
| 3.5354 | 57.0 | 741 | 4.1200 | 0.815 | 0.3003 | 1.7620 | 0.815 | 0.8049 | 0.1499 | 0.0490 |
| 3.5354 | 58.0 | 754 | 4.1419 | 0.815 | 0.3017 | 1.8463 | 0.815 | 0.8049 | 0.1459 | 0.0499 |
| 3.5354 | 59.0 | 767 | 4.1527 | 0.815 | 0.3041 | 1.8269 | 0.815 | 0.8049 | 0.1618 | 0.0496 |
| 3.5354 | 60.0 | 780 | 4.1362 | 0.815 | 0.3002 | 1.7666 | 0.815 | 0.8049 | 0.1461 | 0.0489 |
| 3.5354 | 61.0 | 793 | 4.1470 | 0.815 | 0.3009 | 1.8213 | 0.815 | 0.8049 | 0.1471 | 0.0491 |
| 3.5354 | 62.0 | 806 | 4.1503 | 0.815 | 0.2991 | 1.8235 | 0.815 | 0.8049 | 0.1604 | 0.0496 |
| 3.5354 | 63.0 | 819 | 4.1544 | 0.815 | 0.3003 | 1.7546 | 0.815 | 0.8049 | 0.1518 | 0.0487 |
| 3.5354 | 64.0 | 832 | 4.1713 | 0.815 | 0.3023 | 1.8223 | 0.815 | 0.8049 | 0.1543 | 0.0499 |
| 3.5354 | 65.0 | 845 | 4.1716 | 0.815 | 0.3010 | 1.8213 | 0.815 | 0.8049 | 0.1485 | 0.0494 |
| 3.5354 | 66.0 | 858 | 4.1956 | 0.815 | 0.3042 | 1.8287 | 0.815 | 0.8049 | 0.1637 | 0.0496 |
| 3.5354 | 67.0 | 871 | 4.1845 | 0.815 | 0.3018 | 1.8259 | 0.815 | 0.8049 | 0.1519 | 0.0488 |
| 3.5354 | 68.0 | 884 | 4.2055 | 0.815 | 0.3037 | 1.8339 | 0.815 | 0.8049 | 0.1504 | 0.0496 |
| 3.5354 | 69.0 | 897 | 4.2079 | 0.815 | 0.3039 | 1.8281 | 0.815 | 0.8049 | 0.1554 | 0.0491 |
| 3.5354 | 70.0 | 910 | 4.2125 | 0.815 | 0.3034 | 1.7637 | 0.815 | 0.8049 | 0.1500 | 0.0490 |
| 3.5354 | 71.0 | 923 | 4.2179 | 0.815 | 0.3035 | 1.8254 | 0.815 | 0.8049 | 0.1531 | 0.0492 |
| 3.5354 | 72.0 | 936 | 4.2270 | 0.815 | 0.3040 | 1.8270 | 0.815 | 0.8049 | 0.1528 | 0.0493 |
| 3.5354 | 73.0 | 949 | 4.2294 | 0.815 | 0.3041 | 1.8260 | 0.815 | 0.8049 | 0.1531 | 0.0488 |
| 3.5354 | 74.0 | 962 | 4.2383 | 0.815 | 0.3043 | 1.8261 | 0.815 | 0.8049 | 0.1513 | 0.0492 |
| 3.5354 | 75.0 | 975 | 4.2441 | 0.815 | 0.3051 | 1.7691 | 0.815 | 0.8049 | 0.1539 | 0.0488 |
| 3.5354 | 76.0 | 988 | 4.2500 | 0.815 | 0.3051 | 1.8287 | 0.815 | 0.8049 | 0.1540 | 0.0490 |
| 3.192 | 77.0 | 1001 | 4.2538 | 0.815 | 0.3053 | 1.8273 | 0.815 | 0.8049 | 0.1542 | 0.0490 |
| 3.192 | 78.0 | 1014 | 4.2573 | 0.815 | 0.3055 | 1.8281 | 0.815 | 0.8049 | 0.1541 | 0.0491 |
| 3.192 | 79.0 | 1027 | 4.2603 | 0.815 | 0.3054 | 1.8275 | 0.815 | 0.8049 | 0.1544 | 0.0490 |
| 3.192 | 80.0 | 1040 | 4.2673 | 0.815 | 0.3060 | 1.8277 | 0.815 | 0.8049 | 0.1544 | 0.0489 |
| 3.192 | 81.0 | 1053 | 4.2697 | 0.815 | 0.3060 | 1.8272 | 0.815 | 0.8049 | 0.1500 | 0.0489 |
| 3.192 | 82.0 | 1066 | 4.2747 | 0.815 | 0.3064 | 1.7765 | 0.815 | 0.8049 | 0.1544 | 0.0489 |
| 3.192 | 83.0 | 1079 | 4.2769 | 0.815 | 0.3063 | 1.8273 | 0.815 | 0.8049 | 0.1503 | 0.0489 |
| 3.192 | 84.0 | 1092 | 4.2824 | 0.815 | 0.3066 | 1.8278 | 0.815 | 0.8049 | 0.1548 | 0.0491 |
| 3.192 | 85.0 | 1105 | 4.2842 | 0.815 | 0.3066 | 1.8276 | 0.815 | 0.8049 | 0.1506 | 0.0489 |
| 3.192 | 86.0 | 1118 | 4.2883 | 0.815 | 0.3070 | 1.8281 | 0.815 | 0.8049 | 0.1508 | 0.0488 |
| 3.192 | 87.0 | 1131 | 4.2907 | 0.815 | 0.3071 | 1.7730 | 0.815 | 0.8049 | 0.1548 | 0.0489 |
| 3.192 | 88.0 | 1144 | 4.2919 | 0.815 | 0.3070 | 1.7739 | 0.815 | 0.8049 | 0.1513 | 0.0489 |
| 3.192 | 89.0 | 1157 | 4.2943 | 0.815 | 0.3071 | 1.8281 | 0.815 | 0.8049 | 0.1514 | 0.0489 |
| 3.192 | 90.0 | 1170 | 4.2954 | 0.815 | 0.3070 | 1.8280 | 0.815 | 0.8049 | 0.1508 | 0.0489 |
| 3.192 | 91.0 | 1183 | 4.2976 | 0.815 | 0.3071 | 1.8282 | 0.815 | 0.8049 | 0.1514 | 0.0489 |
| 3.192 | 92.0 | 1196 | 4.2985 | 0.815 | 0.3070 | 1.7799 | 0.815 | 0.8049 | 0.1509 | 0.0489 |
| 3.192 | 93.0 | 1209 | 4.3000 | 0.815 | 0.3072 | 1.7832 | 0.815 | 0.8049 | 0.1514 | 0.0489 |
| 3.192 | 94.0 | 1222 | 4.3016 | 0.815 | 0.3073 | 1.7775 | 0.815 | 0.8049 | 0.1516 | 0.0489 |
| 3.192 | 95.0 | 1235 | 4.3025 | 0.815 | 0.3072 | 1.8282 | 0.815 | 0.8049 | 0.1510 | 0.0489 |
| 3.192 | 96.0 | 1248 | 4.3030 | 0.815 | 0.3073 | 1.7778 | 0.815 | 0.8049 | 0.1510 | 0.0489 |
| 3.192 | 97.0 | 1261 | 4.3042 | 0.815 | 0.3073 | 1.7770 | 0.815 | 0.8049 | 0.1516 | 0.0489 |
| 3.192 | 98.0 | 1274 | 4.3047 | 0.815 | 0.3074 | 1.7826 | 0.815 | 0.8049 | 0.1516 | 0.0489 |
| 3.192 | 99.0 | 1287 | 4.3051 | 0.815 | 0.3074 | 1.7777 | 0.815 | 0.8049 | 0.1516 | 0.0489 |
| 3.192 | 100.0 | 1300 | 4.3051 | 0.815 | 0.3074 | 1.7785 | 0.815 | 0.8049 | 0.1516 | 0.0489 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
crumb/opentinystories-68m-base | crumb | 2023-07-20T02:44:23Z | 187 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"dataset:crumb/flan-ul2-tinystories",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-07T21:39:43Z | ---
license: mit
datasets:
- crumb/flan-ul2-tinystories
language:
- en
---
# Tinystories-30m-UL2
*GPT-4 generated model card*
## Model Details
- **Model Name**: [crumb/opentinystories-30m-base](https://huggingface.co/crumb/opentinystories-30m-base)
- **Model Type**: GPTNeoXForCausalLM
- **Model Training Details**: The model is trained using [crumb/flan-ul2-tinystories](https://huggingface.co/datasets/crumb/flan-ul2-tinystories) which contains around a quarter of a million examples generated from Flan-UL2 (20b) with the prompt "Write a short story using the vocabulary of a first-grader."
## Model Description
This model is trained with the specific purpose of generating short narratives using a vocabulary limited to the level of a first-grader. In terms of complexity and language usage, the model is designed to produce simplistic and easily comprehensible text.
Learning from text generated by Flan-UL2 (20b), the model adopts a simple storyline layout and a minimalistic vocabulary, which it recognizes are easier to learn and replicate.
## Training
The model is trained for four epochs on the [crumb/flan-ul2-tinystories](https://huggingface.co/datasets/crumb/flan-ul2-tinystories) dataset (inspired by [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)), created with the help of Flan-UL2 (20b), as opposed to GPT-3.5/4 in the original Tinystories. The data is designed to follow the format of a simple, first-grader-level narrative, which aids the model in learning simple vocabulary and sentence structure.
Training arguments:
```
per_device_train_batch_size=8,
gradient_accumulation_steps=16,
warmup_steps=128,
num_train_epochs=4,
learning_rate=2e-4,
eval_steps=64,
optim="adamw_torch",
```
## Usage
This model serves as a meaningful research tool in exploring the learning tendencies of smaller language models and their ability to grasp simplified language constructs. Its specific training set effectively maps the idea that a constrained vocabulary and simplistic story layouts are inherently easier to learn.
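A minimal generation sketch (the prompt and sampling settings are illustrative, not prescribed by the training setup above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crumb/opentinystories-68m-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time, a little dog", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```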
## Validation and Performance
The model's performance was evaluated using a held-out validation set, which constitutes 1% of the original dataset. During evaluation, the model achieved a loss of N. During training, the model achieved a loss of N.

|
ashmitg/my_awesome_qa_model | ashmitg | 2023-07-20T02:33:05Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-20T02:24:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7253
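As a usage illustration (not part of the original card), a minimal extractive question-answering sketch; the repo id is assumed from this card's title and the question/context are placeholders:
```python
from transformers import pipeline

# Extractive QA with the fine-tuned DistilBERT checkpoint (repo id assumed)
qa = pipeline("question-answering", model="ashmitg/my_awesome_qa_model")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```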
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3801 |
| 2.7532 | 2.0 | 500 | 1.7906 |
| 2.7532 | 3.0 | 750 | 1.7253 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Avitas8485/Dialogpt-small-v1 | Avitas8485 | 2023-07-20T02:25:19Z | 157 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-16T23:38:29Z | ---
language:
- en
pipeline_tag: conversational
--- |
UniverseTBD/falcon-7b-abstracts-tiny | UniverseTBD | 2023-07-20T02:06:34Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"dataset:universeTBD/arxiv-abstracts",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-17T04:42:10Z | ---
datasets:
- universeTBD/arxiv-abstracts
---
# Astronomy hypothesis generation with Falcon-7B
<!-- This model generates astronomy abstracts. -->
It was fine-tuned on several thousand astronomy abstracts collected from arXiv.
## Model Details
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch
online_model = AutoModelForCausalLM.from_pretrained("universeTBD/falcon-7b-abstracts-tiny", torch_dtype=torch.bfloat16,
device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
pipeline = transformers.pipeline(
"text-generation",
model=online_model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"### Instruction: Generate a scientific hypothesis about astronomy in the style of an Arxiv paper.\n ### Hypothesis:",
max_length=500,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
def format_output(output):
output = output.replace("\n", " ") # Replace newline characters with spaces
output = output.replace("\\n", " ")
parts = output.split("###") # Split string at '###'
# Get and clean instruction part
instruction = parts[1].strip()
# Get and clean hypothesis part
hypothesis = parts[2].strip()
# Format the output
formatted_output = f"{instruction}\n\n{hypothesis}"
return formatted_output
format_output(sequences[0]['generated_text'])
```
Example generation:
__Using 3D positions and K magnitudes of stars from the Gaia DR2 for which we have spectroscopic information from the RAVE database, we derive distances to the stellar populations in different parts of the bulge of the Milky Way. We find that the metal-rich (blue) stars in the inner part of the bulge have a disk component, while the metal-poor (red) stars in the inner part of the bulge do not have a discernible disk component and are dominated by halo components. Spectral parameters indicate that the red stars are enhanced in nitrogen and the blue stars are enhanced in iron, suggesting that the red stars may have a faster rotation curve than the blue stars. These morpho-chemical properties are similar to those of the classical thick disk populations. However, the inner part of the bulge stars with metallicity about -1.0 <[Fe/H] < -0.5 do not have a discernible disk component and are also found in the halo component. Stars with metallicity about -2.5 <[Fe/H] < -1.0 in the inner part of the bulge also have a faint halo component and are enhanced in nitrogen. We suggest that the metal-rich blue stars in the inner part of the bulge came from a disk formed in situ and the red stars in the inner part of the bulge came from two different disk-to-halo transition zones which may be associated with the late low-density and late high-density spiral arms, respectively.__
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jinouga/temariv1 | Jinouga | 2023-07-20T01:49:02Z | 5 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-19T00:17:55Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### temariV1 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
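Alternatively, a minimal diffusers sketch (not from the original card; the prompt/trigger token and dtype are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint and sample one image (illustrative settings)
pipe = StableDiffusionPipeline.from_pretrained("Jinouga/temariv1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a portrait of temariv1, highly detailed").images[0]
image.save("temari_sample.png")
```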
Sample pictures of this concept:
|
librarian-bots/hub_discussion_topics | librarian-bots | 2023-07-20T01:38:54Z | 7 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2023-07-20T01:38:50Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# hub_issues_topocs
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("davanstrien/hub_issues_topocs")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 156
* Number of training documents: 6427
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | model - version - training - add - base | 10 | Outlier Topic |
| 0 | yes - upscaling - embeddings - dir - 18 | 1785 | Yes Upscaling VAE Embeddings |
| 1 | images - image - img2img - generated - black | 218 | Image Distortion Investigation |
| 2 | languages - language - chinese - support - multilingual | 169 | Multilingual Language Support |
| 3 | request - thesis - checker - request request - work | 103 | DOI request and thesis checker |
| 4 | bloom - 176b - bloomz - bert - 7b1 | 95 | Bloom inference on BERT |
| 5 | api - inference api - hosted - inference - hosted inference | 80 | Configuring Inference API |
| 6 | report report - report - reports - look - awesome | 78 | Awesome Reports |
| 7 | use model - run model - model run - model use - tune model | 73 | Use model instructions |
| 8 | request access - access request - access - request - request requesting | 65 | Access Request Solution |
| 9 | colab - google - google colab - model google - collab | 64 | "Running Galactica on Colab" |
| 10 | json - config json - config - json file - file named | 62 | JSON configuration files |
| 11 | load model - load - model working - unable load - unable | 60 | "Model loading issues" |
| 12 | text - text generation - words - truncated - generation | 57 | Text Generation Techniques |
| 13 | label - labels - tags - classifier - entity | 57 | Document Labels |
| 14 | data - model dataset - dataset - train model - used train | 55 | Model Training Data |
| 15 | issue report - issue - report - 论文 - artists | 55 | Ethical Issues in Artists' Legal Discussion |
| 16 | loading - loading model - error loading - model error - load model | 55 | Model Loading Errors |
| 17 | error error - error - 500 error - connection - unknown error | 49 | Error 500 Connection |
| 18 | train model - train - trained - model did - model trained | 46 | Training models in Arabic |
| 19 | stable diffusion - diffusion - stable - diffusion v1 - diffusion webui | 46 | Stable Diffusion Downloads |
| 20 | question - answers - questions - tts - double | 45 | Question about Fig.2c |
| 21 | length - max - maximum - limit - sequence length | 45 | Length Limits and Token Length |
| 22 | model model - model architecture - generator - architecture - type | 42 | Model Architecture |
| 23 | commercial - license - commercial use - license license - mit | 41 | Commercial Use License |
| 24 | transformers - transformer - sentence transformers - sentence - using transformers | 40 | Issues with sentence transformers |
| 25 | huggingface - hugging face - hugging - face - using hugging | 40 | Hugging Face model usage |
| 26 | legal - legal issue - issue report - issue - report | 40 | Legal Issues Reports |
| 27 | v2 - v3 - anime - wav2vec2 - virus | 40 | Anime Virus Detection Vae |
| 28 | tutorials - thread - tricks - 26 - tips | 39 | Stable Diffusion 26+ Tutorials |
| 29 | difference - fp16 - dpm - opus - opus mt | 39 | Difference between phase1 and phase2 |
| 30 | tokenizer - using from_pretrained - loading - error loading - load | 37 | Tokenizer Loading Error |
| 31 | output - extraction - truncated - summaries - outputs | 37 | Output Extraction |
| 32 | attribute - object - attributeerror - typeerror - string | 36 | AttributeError in object attributes |
| 33 | ckpt file - ckpt - file ckpt - file - ckpt files | 36 | CKPT file location |
| 34 | dataset dataset - dataset - source dataset - datasets - source | 36 | dataset source semantic search |
| 35 | size - mismatch - discrepancy - vocab size - dimensionality | 36 | Size Mismatch Discrepancy |
| 36 | license - license license - permission - agreement - licence | 36 | License Agreement |
| 37 | model card - card - card model - building model - building | 35 | Model Card Typos |
| 38 | demo - space - spaces - gradio - cause | 35 | Troubleshooting Gradio Demo |
| 39 | commercially - does model - commercial - model used - usable | 34 | Commercial Usability of AI Model |
| 40 | automatic1111 - webui - automatic - ui - web ui | 33 | Automatic1111 WebUI |
| 41 | import - transformers - module - failed - export | 33 | ImportError in Transformers Module |
| 42 | example - examples - example use - prompt example - usage example | 33 | Example Usage |
| 43 | audio - noise - spectrogram - second - speaker | 33 | Audio Transcription and Conversion |
| 44 | cool - love - idea - amazing - great | 32 | "cool and amazing" |
| 45 | language model - language - kenlm - lm - multilingual | 32 | Language Model Inference with KenLM |
| 46 | really - nice - cool - love - amazing | 32 | amazing model |
| 47 | sagemaker - endpoint - deployment - deploy - amazon | 32 | Deploying SageMaker Endpoints |
| 48 | training training - training - training steps - general - video | 31 | "Training Steps Video" |
| 49 | tokenizer - problems - masked - tokenizer tokenizer - tokens | 31 | Tokenizer Problems |
| 50 | sd - sd2 - sd sd - does support - wd | 30 | Using SD with Different Versions |
| 51 | test - testing - sampler - discussion - split | 30 | Testing Sampler Discussion |
| 52 | argument - unexpected - keyword - typeerror - got | 30 | Unexpected keyword argument TypeError |
| 53 | float - runtimeerror expected - runtimeerror - expected - type | 30 | RuntimeErrors with Float and Half Types |
| 54 | dataset used - dataset - dataset dataset - used fine - used | 28 | Dataset Usage |
| 55 | json - json file - model architecture - inconsistency - architecture | 28 | JSON file inconsistency |
| 56 | usage - project - app - macos - usage questions | 28 | Usage with Sherpa |
| 57 | reproduce - results - result - civitai - reproducing results | 28 | Reproduce Result Difficulty |
| 58 | gene - cell - question generation - generation - geneformer | 27 | Gene Embedding Generation |
| 59 | gpu - gpus - multiple - gpu run - model multiple | 27 | Multi-GPU Model Execution |
| 60 | tokenizer use - wlop - mean - token - webui version | 26 | Tokenizer for Cantonese |
| 61 | model fine - tuning model - fine tuning - fine - tuning | 26 | Fine-Tuning the Model |
| 62 | model training - training model - training - redshift - model model | 26 | Model Training |
| 63 | bot - discord - tesla - chat - character | 26 | Tesla Discord Bot 2021 |
| 64 | work - doesn work - doesn - dont - does appear | 26 | Non-functional potty lora |
| 65 | use use - use - best - way use - methods | 26 | Best ways to use |
| 66 | report card - metadata - card - report - | 26 | Metadata Report Card |
| 67 | guide - instructions - guidance - prompt - cost | 25 | Fine-tuning guide instructions |
| 68 | code - finetuning code - finetuning - fine tuning - tuning | 25 | Fine-tuning Code Sample |
| 69 | dataset - custom dataset - dataset fine - custom - fine tuning | 25 | Custom dataset fine-tuning |
| 70 | safetensors - safetensor - version - version safetensors - safetensor version | 25 | SafeTensors Version Inquiry |
| 71 | model based - task model - model changes - bring - v7 | 25 | Model Description and Changes |
| 72 | weights - weight - flax - diffusers weights - load weights | 25 | Outdated Flax Weights |
| 73 | style - modern - mode - new - dark mode | 24 | Style in Modern Technology |
| 74 | convert - format - trying convert - safetensors - converter | 24 | Safetensors conversion error |
| 75 | checkpoint - save - checkpoint file - checkpoints - restore | 24 | Checkpoint Safety Restore |
| 76 | t5 - flan t5 - flan - google flan - xxl | 23 | T5 vs Flan-T5 Differences |
| 77 | download model - model load - download - load - model download | 23 | "Model Download" |
| 78 | access access - access - access need - need access - need | 23 | Access Request Assistance |
| 79 | model details - details model - details - information model - model access | 23 | Model Details |
| 80 | job - excellent - nice - great - congrats | 23 | Job Well Done |
| 81 | onnx - conversion - onnx conversion - convert - torchscript | 22 | ONNX Conversion Implementation |
| 82 | git - repository - repo - cloning - slow | 22 | Git repository cloning issues |
| 83 | online - 50 - 200 - buy - annotator | 22 | Buy Medications Online |
| 84 | access - request access - acces request - access request - request | 22 | Access Request |
| 85 | cuda - cuda memory - memory - cuda error - memory cuda | 22 | CUDA memory out of error |
| 86 | api model - api - inference api - model api - trying use | 22 | API Model Errors |
| 87 | training data - data training - data - training dataset - training | 22 | Data Training Examples |
| 88 | pipeline - valid - pipe - sentence similarity - similarity | 21 | Pipeline error analysis |
| 89 | tensor - tensors - device - expected - size | 21 | Tensor size mismatch errors |
| 90 | in_silico_perturber - eos_token_id - switch - 64 - encoder | 21 | Error in decoder generation |
| 91 | pytorch_model - pytorch_model bin - bin - diffusion_pytorch_model bin - diffusion_pytorch_model | 21 | Missing pytorch_model.bin file |
| 92 | 404 - url - https - https huggingface - resolve | 21 | 404 error Huggingface documents |
| 93 | requirements - acess - feature request - request request - feature | 21 | System Requirements Access |
| 94 | info - technical - details - information - detailed | 21 | Technical Details Inquiry |
| 95 | hello - hi - good - translates - 100 | 20 | Greetings and Translations |
| 96 | accuracy - drop - compatibility - precision - half precision | 20 | Accuracy Drop in Precision |
| 97 | access request - request access - access - request - new | 20 | Access Request |
| 98 | file missing - log - filenotfounderror - location - sorry | 20 | File Not Found |
| 99 | model card - card - link model - link - example model | 20 | Broken link in model |
| 100 | python - kernel - 10 - pytorch - talks | 20 | Python usage and errors |
| 101 | bug - fix - racist - possible bug - thing | 19 | Bug Fix with Racist Bug |
| 102 | training code - code training - code - share - share training | 19 | "Training Code Sharing" |
| 103 | license - accept - license license - model accept - indication | 19 | Model License |
| 104 | gpt - protgpt2 - 6b - jt - gpt jt | 19 | GPT-JT-6B-v1 Abilities |
| 105 | report report - report - - - | 19 | Multiple Reports on Topic |
| 106 | tuning fine - tune fine - fine - fine tuning - tuning | 18 | Fine-tuning for domain adaptation |
| 107 | inpaint model - inpaint - ix - size model - model pruned | 18 | Inpaint Model |
| 108 | config file - config - tokenizer config - files config - file | 18 | Config File Troubleshooting |
| 109 | sample code - example - sample - copied - error example | 18 | Issues with sample code |
| 110 | nsfw - nsfw content - content - disable - safety | 18 | NSFW Content Filtering |
| 111 | length - summary - longformer - summary length - text length | 18 | Length of Summaries |
| 112 | access download - access - download - access access - download working | 18 | Access Download |
| 113 | thank - thanks - just want - pretty - request thank | 18 | Thank you efforts |
| 114 | sd v1 - v1 - ema ckpt - sd - ema | 18 | Access to sd-v1-4-full-ema.ckpt |
| 115 | padding_side - tokens - token - cls token - token id | 18 | Padding and token discrepancy |
| 116 | amd - vram - gb - gpu - 448 | 17 | "AMD GPU compatibility" |
| 117 | dataset - pretraining - dataset dataset - datasets - request dataset | 17 | Dataset Pretraining |
| 118 | version - ggml version - version ggml - ggml - pytorch version | 17 | "Version Possibility" |
| 119 | memory - leak - a100 - cuda memory - memory google | 17 | Memory-related Issues |
| 120 | trigger - words - word - trigger word - semantic | 17 | Trigger words and semantic search |
| 121 | result - results - output - score - ways | 16 | Visualizing Inference Results |
| 122 | sd - tested - sd sd - lora training - ui | 16 | Stable Diffusion LORA Training |
| 123 | ckpt file - bin - convert - weights - dreambooth | 16 | Convert Diffusion Diffusers to CKPT |
| 124 | need help - help - help help - need - started | 16 | Need Help Getting Started |
| 125 | keyerror - key - exception error - key error - codegen | 16 | KeyError Troubleshooting |
| 126 | controlnet - control - a1111 - installed - model embedding | 16 | ControlNet not working |
| 127 | implementation - issue - solved - np - experiencing | 16 | Implementation Issue Fix |
| 128 | runtimeerror - time series - everytime - process runtimeerror - try run | 16 | Time Series Runtime Error |
| 129 | use use - use - use readme - use diffusers - tk | 15 | How to use Diffusers |
| 130 | training dataset - dataset used - used dataset - nli - used training | 15 | Training Dataset Used |
| 131 | yaml files - colab pc - install run - diffusion google - train custom | 15 | Stable Diffusion Tutorials |
| 132 | spam - deleted - removed - delete - contact | 15 | Removal of Spam Discussion |
| 133 | details training - details - training - details details - details info | 14 | Training Details |
| 134 | hyper parameters - hyper - parameters - provide - provide training | 14 | Hyperparameter Optimization |
| 135 | fine tune - tune - ner - fine - emotions | 14 | Fine-tune Sentence Embeddings |
| 136 | model using - using model - examples - question lora - models used | 14 | Inkpunk Diffusion model |
| 137 | error running - running - running example - usage code - code | 14 | Error running example code |
| 138 | difference - alpaca - model difference - original model - difference model | 14 | Model Differences |
| 139 | install - locally - know install - run local - mini | 14 | "How to install locally" |
| 140 | training script - script - script training - sharing training - midi | 13 | Training Script |
| 141 | model file - missing model - corrupt - file model - file missing | 13 | Model File Issues |
| 142 | error help - help error - help - solve - try | 13 | Error Help |
| 143 | hardware - hardware requirements - requirements - gpu inference - requirements fine | 13 | Hardware Requirements for Inference |
| 144 | update - updated - channel - expired - new update | 13 | update query status |
| 145 | negative - negative prompt - negative prompts - prompts - prompt | 13 | "Negative Prompt Function" |
| 146 | unable run - unable - run unable - run - human | 13 | Unable to run on local machine |
| 147 | injection - nmkd gui - nmkd - tutorial videos - gui | 12 | Stable Diffusion Tutorial Videos |
| 148 | download download - download - request acces - know download - fim | 12 | "Download Instructions" |
| 149 | transformers - sentence transformers - huggingface transformers - different results - usage | 12 | Transformer Usage Discrepancy |
| 150 | link - broken link - broken - documentation - expired | 11 | Broken links and documentation |
| 151 | broke - padding - dead - kenlm - dropout | 11 | "Dead KenLM Finetuning" |
| 152 | training question - question training - training process - question regarding - question | 11 | Training Process Question |
| 153 | dataset training - training data - training dataset - data training - custom dataset | 11 | Training Data Quality |
| 154 | download - download download - possible download - hd 18 - hd | 11 | Troubleshooting download errors |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.6
|
gmurillo/setfit-keywords-group | gmurillo | 2023-07-20T01:18:20Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bart",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-07-20T01:15:45Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gmurillo/setfit-keywords-group
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gmurillo/setfit-keywords-group")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
naveenkarakavalasa/t5-small-finetuned-xsum | naveenkarakavalasa | 2023-07-20T01:12:01Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-18T19:38:11Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4782
- Rouge1: 28.2928
- Rouge2: 7.7409
- Rougel: 22.2466
- Rougelsum: 22.2535
- Gen Len: 18.8222
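For reference, a minimal summarization sketch (not part of the original card; the input article and generation settings are placeholders):
```python
from transformers import pipeline

# Abstractive summarization with the fine-tuned T5 checkpoint (repo id assumed from this card)
summarizer = pipeline("summarization", model="naveenkarakavalasa/t5-small-finetuned-xsum")
article = (
    "The local council has approved plans for a new cycle path connecting the town "
    "centre to the railway station, with construction expected to start next spring."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False)[0]["summary_text"])
```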
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7159 | 1.0 | 12753 | 2.4782 | 28.2928 | 7.7409 | 22.2466 | 22.2535 | 18.8222 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Vezora/WizardOrca-7bv2-lora | Vezora | 2023-07-20T01:09:08Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T00:57:28Z | ---
library_name: peft
license: apache-2.0
---
This is a QLoRA-trained Llama 7B v2 adapter, trained on 1,250 high-quality examples from the uncensored WizardOrca dataset plus a custom GPT-4 dataset (examples selected by length).
Training settings:
- 4 epochs
- 2e-5 learning rate
- micro-batch size 1
- batch size 128
- 8-bit Adam optimizer
- 2048-token context length
The model can use the standard Llama v2 prompt format or the Alpaca chat format, since the dataset was converted to Alpaca format.
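A minimal sketch of loading the adapter with PEFT on top of a Llama v2 7B base model (not from the original card; the base-model repo id, prompt, and generation settings are assumptions):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model: any Llama v2 7B checkpoint you have access to (repo id below is an assumption)
base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach this LoRA adapter
model = PeftModel.from_pretrained(base_model, "Vezora/WizardOrca-7bv2-lora")

# Alpaca-style prompt, matching the dataset format described above
prompt = "### Instruction:\nExplain what a LoRA adapter is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```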
Footnotes
---
The model has not lost its ability to handle 4096-token contexts, even though this adapter was trained on 2048 tokens.
The model performs exceptionally well based on my preliminary human evaluation.
Benchmarks coming soon.
(trained with oobabooga webui)
https://github.com/oobabooga/text-generation-webui
Original creator of the main dataset: Psmathur
https://huggingface.co/psmathur |
beomi/KoRWKV-6B | beomi | 2023-07-20T01:07:48Z | 2,631 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"rwkv",
"text-generation",
"KoRWKV",
"ko",
"doi:10.57967/hf/1292",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-26T07:24:57Z | ---
license: mit
language:
- ko
pipeline_tag: text-generation
tags:
- KoRWKV
---
> Instruction-Finetuned model is available at [beomi/KoAlpaca-KoRWKV-6B](https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B)
# KoRWKV Model Card
KoRWKV (6B) was trained on a Korean dataset with the RWKV-v4neo architecture.
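For reference, a minimal text-generation sketch with 🤗 Transformers (not part of the original card; the prompt and decoding settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/KoRWKV-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A short Korean prompt (illustrative)
inputs = tokenizer("한국어 언어모델 KoRWKV는", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```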
## Model details
**Researcher developing the model**
Junbum Lee (aka Beomi)
**Model date**
KoRWKV was trained between May and July 2023.
**Model version**
This is 1st release of the model.
**Model type**
Find more about RWKV at https://github.com/BlinkDL/RWKV-LM
**License**
MIT
## Intended use
**Primary intended uses**
The primary use of KoRWKV is research on Korean open-source large language models.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
KoRWKV is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
KoRWKV is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content. |
gowrias12/swin-tiny-patch4-window7-224-finetuned-cac | gowrias12 | 2023-07-20T01:03:06Z | 197 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-20T01:02:10Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-cac
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.36363636363636365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-cac
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0394
- Accuracy: 0.3636
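For reference, a minimal inference sketch (not part of the original card; the image path is a placeholder):
```python
from transformers import pipeline

# Image classification with the fine-tuned Swin checkpoint (repo id assumed from this card)
classifier = pipeline("image-classification", model="gowrias12/swin-tiny-patch4-window7-224-finetuned-cac")
print(classifier("path/to/example_image.jpg"))
```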
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.3354 | 0.1818 |
| No log | 2.0 | 3 | 1.0394 | 0.3636 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rzambrano/ppo-Huggy | rzambrano | 2023-07-20T01:02:53Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-20T01:02:49Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rzambrano/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
EllaHong/ployglot-ko-5.8b-Combined_qlora_4b | EllaHong | 2023-07-20T00:28:08Z | 5 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-20T00:28:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
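For reference, the same quantization settings expressed with transformers' `BitsAndBytesConfig` (a sketch, not from the original card):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```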
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/vit-base_rvl_tobacco_crl | jordyvl | 2023-07-20T00:08:26Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-19T23:12:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl_tobacco_crl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_tobacco_crl
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5075
- Accuracy: 0.92
- Brier Loss: 0.1544
- Nll: 0.6650
- F1 Micro: 0.92
- F1 Macro: 0.9150
- Ece: 0.1721
- Aurc: 0.0193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 2.3823 | 0.045 | 0.9050 | 9.6078 | 0.045 | 0.0481 | 0.1570 | 0.9673 |
| No log | 1.96 | 6 | 2.3642 | 0.05 | 0.9005 | 8.5700 | 0.0500 | 0.0549 | 0.1567 | 0.9599 |
| No log | 2.96 | 9 | 2.3130 | 0.095 | 0.8925 | 6.9490 | 0.095 | 0.0853 | 0.1833 | 0.9127 |
| No log | 3.96 | 12 | 2.2603 | 0.265 | 0.8804 | 5.6508 | 0.265 | 0.1642 | 0.2794 | 0.7458 |
| No log | 4.96 | 15 | 2.2077 | 0.38 | 0.8637 | 4.0696 | 0.38 | 0.2272 | 0.3548 | 0.4172 |
| No log | 5.96 | 18 | 2.1176 | 0.47 | 0.8411 | 2.4954 | 0.47 | 0.3062 | 0.4299 | 0.2410 |
| No log | 6.96 | 21 | 2.0268 | 0.64 | 0.8132 | 2.0526 | 0.64 | 0.5126 | 0.5273 | 0.1330 |
| No log | 7.96 | 24 | 1.9258 | 0.735 | 0.7792 | 1.7187 | 0.735 | 0.6337 | 0.5870 | 0.0787 |
| No log | 8.96 | 27 | 1.8114 | 0.77 | 0.7409 | 1.3797 | 0.7700 | 0.6746 | 0.6034 | 0.0556 |
| No log | 9.96 | 30 | 1.7062 | 0.8 | 0.6999 | 1.1402 | 0.8000 | 0.7266 | 0.6005 | 0.0466 |
| No log | 10.96 | 33 | 1.5916 | 0.825 | 0.6548 | 0.9516 | 0.825 | 0.7706 | 0.5882 | 0.0427 |
| No log | 11.96 | 36 | 1.4855 | 0.86 | 0.6103 | 0.8848 | 0.8600 | 0.8201 | 0.5829 | 0.0388 |
| No log | 12.96 | 39 | 1.3944 | 0.87 | 0.5688 | 0.7924 | 0.87 | 0.8361 | 0.5720 | 0.0349 |
| No log | 13.96 | 42 | 1.3176 | 0.895 | 0.5326 | 0.6952 | 0.895 | 0.8740 | 0.5576 | 0.0324 |
| No log | 14.96 | 45 | 1.2435 | 0.9 | 0.4978 | 0.6632 | 0.9 | 0.8838 | 0.5370 | 0.0293 |
| No log | 15.96 | 48 | 1.1760 | 0.915 | 0.4653 | 0.6368 | 0.915 | 0.9034 | 0.5272 | 0.0257 |
| No log | 16.96 | 51 | 1.1101 | 0.915 | 0.4338 | 0.6194 | 0.915 | 0.9011 | 0.4963 | 0.0241 |
| No log | 17.96 | 54 | 1.0518 | 0.915 | 0.4058 | 0.6131 | 0.915 | 0.9011 | 0.4750 | 0.0231 |
| No log | 18.96 | 57 | 1.0011 | 0.915 | 0.3808 | 0.6125 | 0.915 | 0.9011 | 0.4479 | 0.0222 |
| No log | 19.96 | 60 | 0.9471 | 0.92 | 0.3566 | 0.5890 | 0.92 | 0.9102 | 0.4353 | 0.0203 |
| No log | 20.96 | 63 | 0.8962 | 0.915 | 0.3352 | 0.5856 | 0.915 | 0.9047 | 0.4245 | 0.0185 |
| No log | 21.96 | 66 | 0.8635 | 0.92 | 0.3159 | 0.5865 | 0.92 | 0.9115 | 0.3999 | 0.0192 |
| No log | 22.96 | 69 | 0.8333 | 0.93 | 0.2987 | 0.5791 | 0.93 | 0.9260 | 0.3917 | 0.0189 |
| No log | 23.96 | 72 | 0.8079 | 0.925 | 0.2839 | 0.5871 | 0.925 | 0.9159 | 0.3733 | 0.0173 |
| No log | 24.96 | 75 | 0.7644 | 0.93 | 0.2681 | 0.5755 | 0.93 | 0.9233 | 0.3644 | 0.0198 |
| No log | 25.96 | 78 | 0.7443 | 0.925 | 0.2567 | 0.5750 | 0.925 | 0.9204 | 0.3419 | 0.0193 |
| No log | 26.96 | 81 | 0.7250 | 0.93 | 0.2461 | 0.5722 | 0.93 | 0.9227 | 0.3345 | 0.0176 |
| No log | 27.96 | 84 | 0.6988 | 0.93 | 0.2344 | 0.5118 | 0.93 | 0.9227 | 0.3151 | 0.0172 |
| No log | 28.96 | 87 | 0.6923 | 0.935 | 0.2272 | 0.5730 | 0.935 | 0.9303 | 0.3162 | 0.0175 |
| No log | 29.96 | 90 | 0.6752 | 0.935 | 0.2196 | 0.5646 | 0.935 | 0.9303 | 0.3016 | 0.0179 |
| No log | 30.96 | 93 | 0.6576 | 0.93 | 0.2117 | 0.5554 | 0.93 | 0.9227 | 0.2934 | 0.0188 |
| No log | 31.96 | 96 | 0.6476 | 0.93 | 0.2073 | 0.5617 | 0.93 | 0.9227 | 0.2867 | 0.0193 |
| No log | 32.96 | 99 | 0.6349 | 0.93 | 0.2009 | 0.5648 | 0.93 | 0.9245 | 0.2818 | 0.0178 |
| No log | 33.96 | 102 | 0.6195 | 0.92 | 0.1949 | 0.6098 | 0.92 | 0.9140 | 0.2612 | 0.0185 |
| No log | 34.96 | 105 | 0.6158 | 0.92 | 0.1921 | 0.6190 | 0.92 | 0.9140 | 0.2659 | 0.0184 |
| No log | 35.96 | 108 | 0.6093 | 0.93 | 0.1891 | 0.6182 | 0.93 | 0.9273 | 0.2616 | 0.0187 |
| No log | 36.96 | 111 | 0.6007 | 0.925 | 0.1854 | 0.6169 | 0.925 | 0.9170 | 0.2561 | 0.0182 |
| No log | 37.96 | 114 | 0.5877 | 0.925 | 0.1815 | 0.5400 | 0.925 | 0.9170 | 0.2575 | 0.0179 |
| No log | 38.96 | 117 | 0.5887 | 0.925 | 0.1793 | 0.6079 | 0.925 | 0.9170 | 0.2544 | 0.0188 |
| No log | 39.96 | 120 | 0.5865 | 0.915 | 0.1775 | 0.6123 | 0.915 | 0.9107 | 0.2510 | 0.0192 |
| No log | 40.96 | 123 | 0.5753 | 0.925 | 0.1738 | 0.5984 | 0.925 | 0.9230 | 0.2323 | 0.0190 |
| No log | 41.96 | 126 | 0.5727 | 0.92 | 0.1738 | 0.5394 | 0.92 | 0.9140 | 0.2305 | 0.0184 |
| No log | 42.96 | 129 | 0.5644 | 0.92 | 0.1724 | 0.5476 | 0.92 | 0.9140 | 0.2276 | 0.0186 |
| No log | 43.96 | 132 | 0.5597 | 0.92 | 0.1703 | 0.6031 | 0.92 | 0.9140 | 0.2285 | 0.0194 |
| No log | 44.96 | 135 | 0.5597 | 0.92 | 0.1688 | 0.6026 | 0.92 | 0.9140 | 0.2216 | 0.0187 |
| No log | 45.96 | 138 | 0.5580 | 0.925 | 0.1676 | 0.6051 | 0.925 | 0.9170 | 0.2194 | 0.0187 |
| No log | 46.96 | 141 | 0.5541 | 0.925 | 0.1658 | 0.6063 | 0.925 | 0.9170 | 0.2252 | 0.0184 |
| No log | 47.96 | 144 | 0.5533 | 0.925 | 0.1654 | 0.6153 | 0.925 | 0.9170 | 0.2164 | 0.0183 |
| No log | 48.96 | 147 | 0.5464 | 0.925 | 0.1629 | 0.6085 | 0.925 | 0.9170 | 0.2225 | 0.0183 |
| No log | 49.96 | 150 | 0.5407 | 0.925 | 0.1612 | 0.5988 | 0.925 | 0.9170 | 0.2187 | 0.0179 |
| No log | 50.96 | 153 | 0.5432 | 0.92 | 0.1625 | 0.6095 | 0.92 | 0.9150 | 0.2040 | 0.0177 |
| No log | 51.96 | 156 | 0.5425 | 0.915 | 0.1648 | 0.6964 | 0.915 | 0.9118 | 0.1977 | 0.0182 |
| No log | 52.96 | 159 | 0.5376 | 0.915 | 0.1623 | 0.6959 | 0.915 | 0.9118 | 0.2129 | 0.0192 |
| No log | 53.96 | 162 | 0.5299 | 0.915 | 0.1596 | 0.6710 | 0.915 | 0.9118 | 0.2120 | 0.0194 |
| No log | 54.96 | 165 | 0.5240 | 0.92 | 0.1579 | 0.6072 | 0.92 | 0.9150 | 0.2076 | 0.0183 |
| No log | 55.96 | 168 | 0.5297 | 0.92 | 0.1583 | 0.6704 | 0.92 | 0.9150 | 0.1997 | 0.0182 |
| No log | 56.96 | 171 | 0.5307 | 0.915 | 0.1585 | 0.6782 | 0.915 | 0.9118 | 0.2091 | 0.0187 |
| No log | 57.96 | 174 | 0.5257 | 0.925 | 0.1566 | 0.6692 | 0.925 | 0.9180 | 0.1970 | 0.0193 |
| No log | 58.96 | 177 | 0.5281 | 0.925 | 0.1576 | 0.6703 | 0.925 | 0.9180 | 0.2007 | 0.0182 |
| No log | 59.96 | 180 | 0.5282 | 0.92 | 0.1579 | 0.6690 | 0.92 | 0.9150 | 0.1842 | 0.0185 |
| No log | 60.96 | 183 | 0.5212 | 0.92 | 0.1573 | 0.6672 | 0.92 | 0.9150 | 0.1957 | 0.0189 |
| No log | 61.96 | 186 | 0.5203 | 0.92 | 0.1554 | 0.6655 | 0.92 | 0.9207 | 0.1918 | 0.0199 |
| No log | 62.96 | 189 | 0.5166 | 0.915 | 0.1557 | 0.6689 | 0.915 | 0.9118 | 0.1817 | 0.0195 |
| No log | 63.96 | 192 | 0.5168 | 0.915 | 0.1556 | 0.6695 | 0.915 | 0.9118 | 0.1895 | 0.0191 |
| No log | 64.96 | 195 | 0.5153 | 0.915 | 0.1547 | 0.6661 | 0.915 | 0.9118 | 0.1879 | 0.0188 |
| No log | 65.96 | 198 | 0.5157 | 0.915 | 0.1545 | 0.6665 | 0.915 | 0.9118 | 0.1890 | 0.0191 |
| No log | 66.96 | 201 | 0.5181 | 0.915 | 0.1549 | 0.6703 | 0.915 | 0.9118 | 0.1890 | 0.0191 |
| No log | 67.96 | 204 | 0.5168 | 0.915 | 0.1542 | 0.6686 | 0.915 | 0.9118 | 0.1882 | 0.0193 |
| No log | 68.96 | 207 | 0.5120 | 0.93 | 0.1532 | 0.6643 | 0.93 | 0.9269 | 0.1901 | 0.0195 |
| No log | 69.96 | 210 | 0.5091 | 0.92 | 0.1528 | 0.6596 | 0.92 | 0.9150 | 0.1866 | 0.0194 |
| No log | 70.96 | 213 | 0.5093 | 0.92 | 0.1526 | 0.6607 | 0.92 | 0.9150 | 0.1847 | 0.0182 |
| No log | 71.96 | 216 | 0.5143 | 0.925 | 0.1538 | 0.6675 | 0.925 | 0.9180 | 0.1789 | 0.0180 |
| No log | 72.96 | 219 | 0.5145 | 0.925 | 0.1550 | 0.6728 | 0.925 | 0.9180 | 0.1765 | 0.0187 |
| No log | 73.96 | 222 | 0.5090 | 0.92 | 0.1540 | 0.6658 | 0.92 | 0.9150 | 0.1904 | 0.0191 |
| No log | 74.96 | 225 | 0.5069 | 0.92 | 0.1530 | 0.6606 | 0.92 | 0.9150 | 0.1840 | 0.0189 |
| No log | 75.96 | 228 | 0.5051 | 0.92 | 0.1524 | 0.6624 | 0.92 | 0.9150 | 0.1925 | 0.0186 |
| No log | 76.96 | 231 | 0.5089 | 0.92 | 0.1539 | 0.6698 | 0.92 | 0.9150 | 0.1759 | 0.0189 |
| No log | 77.96 | 234 | 0.5053 | 0.92 | 0.1528 | 0.6647 | 0.92 | 0.9150 | 0.1748 | 0.0188 |
| No log | 78.96 | 237 | 0.5028 | 0.92 | 0.1524 | 0.6598 | 0.92 | 0.9150 | 0.1821 | 0.0182 |
| No log | 79.96 | 240 | 0.5043 | 0.92 | 0.1527 | 0.6615 | 0.92 | 0.9150 | 0.1810 | 0.0181 |
| No log | 80.96 | 243 | 0.5014 | 0.92 | 0.1523 | 0.6622 | 0.92 | 0.9150 | 0.1733 | 0.0184 |
| No log | 81.96 | 246 | 0.5035 | 0.92 | 0.1531 | 0.6635 | 0.92 | 0.9150 | 0.1791 | 0.0183 |
| No log | 82.96 | 249 | 0.5052 | 0.92 | 0.1538 | 0.6669 | 0.92 | 0.9150 | 0.1799 | 0.0186 |
| No log | 83.96 | 252 | 0.5040 | 0.92 | 0.1533 | 0.6640 | 0.92 | 0.9150 | 0.1833 | 0.0188 |
| No log | 84.96 | 255 | 0.5008 | 0.92 | 0.1530 | 0.6588 | 0.92 | 0.9150 | 0.1735 | 0.0188 |
| No log | 85.96 | 258 | 0.5027 | 0.915 | 0.1538 | 0.6599 | 0.915 | 0.9121 | 0.1751 | 0.0187 |
| No log | 86.96 | 261 | 0.5075 | 0.915 | 0.1551 | 0.6661 | 0.915 | 0.9121 | 0.1684 | 0.0187 |
| No log | 87.96 | 264 | 0.5107 | 0.92 | 0.1555 | 0.6734 | 0.92 | 0.9150 | 0.1748 | 0.0186 |
| No log | 88.96 | 267 | 0.5035 | 0.92 | 0.1534 | 0.6676 | 0.92 | 0.9150 | 0.1810 | 0.0192 |
| No log | 89.96 | 270 | 0.5006 | 0.92 | 0.1523 | 0.6624 | 0.92 | 0.9150 | 0.1867 | 0.0200 |
| No log | 90.96 | 273 | 0.4984 | 0.92 | 0.1521 | 0.6605 | 0.92 | 0.9150 | 0.1704 | 0.0201 |
| No log | 91.96 | 276 | 0.4976 | 0.92 | 0.1518 | 0.6586 | 0.92 | 0.9150 | 0.1702 | 0.0201 |
| No log | 92.96 | 279 | 0.4986 | 0.92 | 0.1520 | 0.6584 | 0.92 | 0.9150 | 0.1701 | 0.0201 |
| No log | 93.96 | 282 | 0.5005 | 0.92 | 0.1526 | 0.6596 | 0.92 | 0.9150 | 0.1714 | 0.0201 |
| No log | 94.96 | 285 | 0.5025 | 0.92 | 0.1533 | 0.6614 | 0.92 | 0.9150 | 0.1820 | 0.0202 |
| No log | 95.96 | 288 | 0.5043 | 0.92 | 0.1539 | 0.6634 | 0.92 | 0.9150 | 0.1721 | 0.0195 |
| No log | 96.96 | 291 | 0.5056 | 0.92 | 0.1542 | 0.6644 | 0.92 | 0.9150 | 0.1783 | 0.0194 |
| No log | 97.96 | 294 | 0.5075 | 0.92 | 0.1544 | 0.6648 | 0.92 | 0.9150 | 0.1723 | 0.0194 |
| No log | 98.96 | 297 | 0.5077 | 0.92 | 0.1544 | 0.6649 | 0.92 | 0.9150 | 0.1722 | 0.0194 |
| No log | 99.96 | 300 | 0.5075 | 0.92 | 0.1544 | 0.6650 | 0.92 | 0.9150 | 0.1721 | 0.0193 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Mel-Iza0/RedPajama-ZeroShot-20K-classe_empty | Mel-Iza0 | 2023-07-19T23:59:42Z | 0 | 0 | peft | [
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-07-19T21:19:42Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
jordyvl/225-tiny_tobacco3482_kd | jordyvl | 2023-07-19T23:57:40Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-19T10:57:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 225-tiny_tobacco3482_kd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 225-tiny_tobacco3482_kd
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2991
- Accuracy: 0.775
- Brier Loss: 0.3491
- Nll: 1.2196
- F1 Micro: 0.775
- F1 Macro: 0.7302
- Ece: 0.2602
- Aurc: 0.0644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 1.6344 | 0.23 | 0.8900 | 7.7885 | 0.23 | 0.1633 | 0.2747 | 0.7585 |
| No log | 2.0 | 26 | 1.0824 | 0.385 | 0.7943 | 4.1668 | 0.3850 | 0.2795 | 0.3160 | 0.4560 |
| No log | 3.0 | 39 | 0.8639 | 0.535 | 0.6762 | 3.0145 | 0.535 | 0.4086 | 0.3233 | 0.2913 |
| No log | 4.0 | 52 | 0.7309 | 0.595 | 0.5956 | 2.2236 | 0.595 | 0.4646 | 0.3075 | 0.1944 |
| No log | 5.0 | 65 | 0.6374 | 0.67 | 0.5211 | 2.1759 | 0.67 | 0.5737 | 0.2898 | 0.1450 |
| No log | 6.0 | 78 | 0.6720 | 0.685 | 0.4833 | 2.2861 | 0.685 | 0.5860 | 0.2904 | 0.1331 |
| No log | 7.0 | 91 | 0.6097 | 0.675 | 0.4767 | 2.3133 | 0.675 | 0.5733 | 0.2622 | 0.1519 |
| No log | 8.0 | 104 | 0.5206 | 0.705 | 0.4301 | 1.8228 | 0.705 | 0.6164 | 0.2603 | 0.1038 |
| No log | 9.0 | 117 | 0.5486 | 0.715 | 0.4414 | 1.8451 | 0.715 | 0.6444 | 0.2583 | 0.1063 |
| No log | 10.0 | 130 | 0.5067 | 0.7 | 0.4171 | 1.7759 | 0.7 | 0.6325 | 0.2611 | 0.1071 |
| No log | 11.0 | 143 | 0.4612 | 0.745 | 0.4017 | 1.4919 | 0.745 | 0.6635 | 0.2840 | 0.0838 |
| No log | 12.0 | 156 | 0.4785 | 0.745 | 0.4204 | 1.8579 | 0.745 | 0.6750 | 0.2542 | 0.0979 |
| No log | 13.0 | 169 | 0.4518 | 0.715 | 0.4036 | 1.5697 | 0.715 | 0.6496 | 0.2744 | 0.1002 |
| No log | 14.0 | 182 | 0.5081 | 0.7 | 0.4294 | 1.9850 | 0.7 | 0.6514 | 0.2364 | 0.1225 |
| No log | 15.0 | 195 | 0.4415 | 0.705 | 0.3994 | 1.7828 | 0.705 | 0.6301 | 0.2380 | 0.0992 |
| No log | 16.0 | 208 | 0.3859 | 0.73 | 0.3832 | 1.3431 | 0.7300 | 0.6516 | 0.2548 | 0.0817 |
| No log | 17.0 | 221 | 0.3869 | 0.75 | 0.3832 | 1.2075 | 0.75 | 0.6651 | 0.2622 | 0.0758 |
| No log | 18.0 | 234 | 0.3637 | 0.755 | 0.3770 | 1.2290 | 0.755 | 0.7108 | 0.2569 | 0.0687 |
| No log | 19.0 | 247 | 0.3933 | 0.745 | 0.3700 | 1.4931 | 0.745 | 0.6812 | 0.2434 | 0.0799 |
| No log | 20.0 | 260 | 0.3540 | 0.745 | 0.3721 | 1.1910 | 0.745 | 0.6702 | 0.2208 | 0.0760 |
| No log | 21.0 | 273 | 0.3560 | 0.77 | 0.3718 | 1.1248 | 0.7700 | 0.7142 | 0.2731 | 0.0743 |
| No log | 22.0 | 286 | 0.3530 | 0.74 | 0.3758 | 1.4213 | 0.74 | 0.6902 | 0.2326 | 0.0768 |
| No log | 23.0 | 299 | 0.3419 | 0.745 | 0.3699 | 1.2528 | 0.745 | 0.6714 | 0.2324 | 0.0765 |
| No log | 24.0 | 312 | 0.3302 | 0.775 | 0.3595 | 1.3338 | 0.775 | 0.7120 | 0.2521 | 0.0665 |
| No log | 25.0 | 325 | 0.3533 | 0.775 | 0.3672 | 1.4609 | 0.775 | 0.7167 | 0.2482 | 0.0740 |
| No log | 26.0 | 338 | 0.3416 | 0.775 | 0.3684 | 1.1575 | 0.775 | 0.7124 | 0.2601 | 0.0732 |
| No log | 27.0 | 351 | 0.3463 | 0.75 | 0.3714 | 1.1053 | 0.75 | 0.6868 | 0.2512 | 0.0808 |
| No log | 28.0 | 364 | 0.3298 | 0.775 | 0.3605 | 1.2108 | 0.775 | 0.6986 | 0.2537 | 0.0668 |
| No log | 29.0 | 377 | 0.3278 | 0.77 | 0.3645 | 1.1893 | 0.7700 | 0.7013 | 0.2447 | 0.0765 |
| No log | 30.0 | 390 | 0.3165 | 0.78 | 0.3608 | 1.1615 | 0.78 | 0.7285 | 0.2472 | 0.0712 |
| No log | 31.0 | 403 | 0.3212 | 0.765 | 0.3571 | 1.1317 | 0.765 | 0.6999 | 0.2497 | 0.0725 |
| No log | 32.0 | 416 | 0.3119 | 0.765 | 0.3581 | 1.0644 | 0.765 | 0.6881 | 0.2285 | 0.0675 |
| No log | 33.0 | 429 | 0.3229 | 0.765 | 0.3523 | 1.2937 | 0.765 | 0.7138 | 0.2517 | 0.0658 |
| No log | 34.0 | 442 | 0.3193 | 0.78 | 0.3660 | 1.1849 | 0.78 | 0.7329 | 0.2686 | 0.0700 |
| No log | 35.0 | 455 | 0.3088 | 0.775 | 0.3556 | 1.1613 | 0.775 | 0.7071 | 0.2640 | 0.0680 |
| No log | 36.0 | 468 | 0.3113 | 0.785 | 0.3508 | 1.1715 | 0.785 | 0.7501 | 0.2443 | 0.0656 |
| No log | 37.0 | 481 | 0.3113 | 0.79 | 0.3526 | 1.2334 | 0.79 | 0.7388 | 0.2580 | 0.0639 |
| No log | 38.0 | 494 | 0.3077 | 0.755 | 0.3528 | 1.1152 | 0.755 | 0.6973 | 0.2401 | 0.0692 |
| 0.2783 | 39.0 | 507 | 0.3064 | 0.775 | 0.3567 | 1.2289 | 0.775 | 0.7370 | 0.2417 | 0.0696 |
| 0.2783 | 40.0 | 520 | 0.3063 | 0.77 | 0.3521 | 1.2437 | 0.7700 | 0.7232 | 0.2396 | 0.0688 |
| 0.2783 | 41.0 | 533 | 0.3042 | 0.77 | 0.3541 | 1.2490 | 0.7700 | 0.7234 | 0.2470 | 0.0682 |
| 0.2783 | 42.0 | 546 | 0.2999 | 0.77 | 0.3486 | 1.1626 | 0.7700 | 0.7082 | 0.2491 | 0.0638 |
| 0.2783 | 43.0 | 559 | 0.3020 | 0.77 | 0.3515 | 1.2141 | 0.7700 | 0.7312 | 0.2570 | 0.0687 |
| 0.2783 | 44.0 | 572 | 0.3024 | 0.775 | 0.3502 | 1.2184 | 0.775 | 0.7168 | 0.2568 | 0.0648 |
| 0.2783 | 45.0 | 585 | 0.3002 | 0.78 | 0.3517 | 1.2189 | 0.78 | 0.7364 | 0.2673 | 0.0644 |
| 0.2783 | 46.0 | 598 | 0.3022 | 0.775 | 0.3511 | 1.1594 | 0.775 | 0.7266 | 0.2538 | 0.0661 |
| 0.2783 | 47.0 | 611 | 0.2974 | 0.775 | 0.3464 | 1.2157 | 0.775 | 0.7238 | 0.2630 | 0.0627 |
| 0.2783 | 48.0 | 624 | 0.3003 | 0.78 | 0.3519 | 1.1584 | 0.78 | 0.7318 | 0.2413 | 0.0666 |
| 0.2783 | 49.0 | 637 | 0.2990 | 0.77 | 0.3492 | 1.2187 | 0.7700 | 0.7136 | 0.2401 | 0.0643 |
| 0.2783 | 50.0 | 650 | 0.3019 | 0.765 | 0.3516 | 1.2254 | 0.765 | 0.7180 | 0.2409 | 0.0673 |
| 0.2783 | 51.0 | 663 | 0.2991 | 0.77 | 0.3499 | 1.2186 | 0.7700 | 0.7145 | 0.2566 | 0.0646 |
| 0.2783 | 52.0 | 676 | 0.2990 | 0.77 | 0.3507 | 1.2204 | 0.7700 | 0.7207 | 0.2360 | 0.0651 |
| 0.2783 | 53.0 | 689 | 0.2982 | 0.765 | 0.3488 | 1.1663 | 0.765 | 0.7042 | 0.2338 | 0.0643 |
| 0.2783 | 54.0 | 702 | 0.2969 | 0.775 | 0.3485 | 1.1667 | 0.775 | 0.7302 | 0.2586 | 0.0642 |
| 0.2783 | 55.0 | 715 | 0.2989 | 0.775 | 0.3487 | 1.2181 | 0.775 | 0.7302 | 0.2670 | 0.0647 |
| 0.2783 | 56.0 | 728 | 0.2991 | 0.77 | 0.3499 | 1.2208 | 0.7700 | 0.7136 | 0.2339 | 0.0650 |
| 0.2783 | 57.0 | 741 | 0.2986 | 0.775 | 0.3487 | 1.2162 | 0.775 | 0.7302 | 0.2415 | 0.0639 |
| 0.2783 | 58.0 | 754 | 0.2985 | 0.77 | 0.3490 | 1.2183 | 0.7700 | 0.7207 | 0.2547 | 0.0647 |
| 0.2783 | 59.0 | 767 | 0.2993 | 0.77 | 0.3494 | 1.2218 | 0.7700 | 0.7136 | 0.2417 | 0.0649 |
| 0.2783 | 60.0 | 780 | 0.2983 | 0.77 | 0.3487 | 1.2185 | 0.7700 | 0.7207 | 0.2555 | 0.0646 |
| 0.2783 | 61.0 | 793 | 0.2989 | 0.775 | 0.3492 | 1.2182 | 0.775 | 0.7302 | 0.2444 | 0.0645 |
| 0.2783 | 62.0 | 806 | 0.2987 | 0.775 | 0.3487 | 1.2174 | 0.775 | 0.7302 | 0.2438 | 0.0642 |
| 0.2783 | 63.0 | 819 | 0.2987 | 0.775 | 0.3490 | 1.2198 | 0.775 | 0.7302 | 0.2508 | 0.0646 |
| 0.2783 | 64.0 | 832 | 0.2989 | 0.775 | 0.3494 | 1.2195 | 0.775 | 0.7302 | 0.2609 | 0.0646 |
| 0.2783 | 65.0 | 845 | 0.2990 | 0.775 | 0.3492 | 1.2177 | 0.775 | 0.7302 | 0.2528 | 0.0644 |
| 0.2783 | 66.0 | 858 | 0.2992 | 0.775 | 0.3493 | 1.2193 | 0.775 | 0.7302 | 0.2537 | 0.0646 |
| 0.2783 | 67.0 | 871 | 0.2990 | 0.775 | 0.3493 | 1.2199 | 0.775 | 0.7302 | 0.2510 | 0.0647 |
| 0.2783 | 68.0 | 884 | 0.2991 | 0.775 | 0.3495 | 1.2199 | 0.775 | 0.7302 | 0.2476 | 0.0646 |
| 0.2783 | 69.0 | 897 | 0.2989 | 0.775 | 0.3491 | 1.2187 | 0.775 | 0.7302 | 0.2606 | 0.0646 |
| 0.2783 | 70.0 | 910 | 0.2987 | 0.775 | 0.3490 | 1.2187 | 0.775 | 0.7302 | 0.2436 | 0.0642 |
| 0.2783 | 71.0 | 923 | 0.2990 | 0.775 | 0.3491 | 1.2190 | 0.775 | 0.7302 | 0.2510 | 0.0646 |
| 0.2783 | 72.0 | 936 | 0.2990 | 0.775 | 0.3492 | 1.2191 | 0.775 | 0.7302 | 0.2541 | 0.0646 |
| 0.2783 | 73.0 | 949 | 0.2990 | 0.775 | 0.3491 | 1.2176 | 0.775 | 0.7302 | 0.2509 | 0.0647 |
| 0.2783 | 74.0 | 962 | 0.2990 | 0.775 | 0.3493 | 1.2203 | 0.775 | 0.7302 | 0.2600 | 0.0643 |
| 0.2783 | 75.0 | 975 | 0.2989 | 0.775 | 0.3492 | 1.2203 | 0.775 | 0.7302 | 0.2665 | 0.0643 |
| 0.2783 | 76.0 | 988 | 0.2991 | 0.775 | 0.3492 | 1.2193 | 0.775 | 0.7302 | 0.2601 | 0.0643 |
| 0.0005 | 77.0 | 1001 | 0.2991 | 0.775 | 0.3491 | 1.2201 | 0.775 | 0.7302 | 0.2598 | 0.0645 |
| 0.0005 | 78.0 | 1014 | 0.2991 | 0.775 | 0.3490 | 1.2198 | 0.775 | 0.7302 | 0.2441 | 0.0645 |
| 0.0005 | 79.0 | 1027 | 0.2991 | 0.775 | 0.3492 | 1.2182 | 0.775 | 0.7302 | 0.2513 | 0.0645 |
| 0.0005 | 80.0 | 1040 | 0.2992 | 0.775 | 0.3491 | 1.2183 | 0.775 | 0.7302 | 0.2514 | 0.0645 |
| 0.0005 | 81.0 | 1053 | 0.2992 | 0.775 | 0.3492 | 1.2196 | 0.775 | 0.7302 | 0.2584 | 0.0646 |
| 0.0005 | 82.0 | 1066 | 0.2992 | 0.775 | 0.3493 | 1.2199 | 0.775 | 0.7302 | 0.2520 | 0.0646 |
| 0.0005 | 83.0 | 1079 | 0.2991 | 0.775 | 0.3491 | 1.2191 | 0.775 | 0.7302 | 0.2514 | 0.0643 |
| 0.0005 | 84.0 | 1092 | 0.2991 | 0.775 | 0.3491 | 1.2194 | 0.775 | 0.7302 | 0.2516 | 0.0645 |
| 0.0005 | 85.0 | 1105 | 0.2990 | 0.775 | 0.3491 | 1.2188 | 0.775 | 0.7302 | 0.2585 | 0.0645 |
| 0.0005 | 86.0 | 1118 | 0.2991 | 0.775 | 0.3492 | 1.2193 | 0.775 | 0.7302 | 0.2584 | 0.0645 |
| 0.0005 | 87.0 | 1131 | 0.2991 | 0.775 | 0.3491 | 1.2201 | 0.775 | 0.7302 | 0.2667 | 0.0643 |
| 0.0005 | 88.0 | 1144 | 0.2991 | 0.775 | 0.3492 | 1.2199 | 0.775 | 0.7302 | 0.2516 | 0.0645 |
| 0.0005 | 89.0 | 1157 | 0.2990 | 0.775 | 0.3491 | 1.2193 | 0.775 | 0.7302 | 0.2603 | 0.0644 |
| 0.0005 | 90.0 | 1170 | 0.2990 | 0.775 | 0.3492 | 1.2197 | 0.775 | 0.7302 | 0.2536 | 0.0645 |
| 0.0005 | 91.0 | 1183 | 0.2990 | 0.775 | 0.3491 | 1.2201 | 0.775 | 0.7302 | 0.2668 | 0.0644 |
| 0.0005 | 92.0 | 1196 | 0.2991 | 0.775 | 0.3491 | 1.2190 | 0.775 | 0.7302 | 0.2533 | 0.0644 |
| 0.0005 | 93.0 | 1209 | 0.2991 | 0.775 | 0.3492 | 1.2192 | 0.775 | 0.7302 | 0.2602 | 0.0645 |
| 0.0005 | 94.0 | 1222 | 0.2991 | 0.775 | 0.3492 | 1.2193 | 0.775 | 0.7302 | 0.2533 | 0.0645 |
| 0.0005 | 95.0 | 1235 | 0.2991 | 0.775 | 0.3491 | 1.2192 | 0.775 | 0.7302 | 0.2533 | 0.0644 |
| 0.0005 | 96.0 | 1248 | 0.2991 | 0.775 | 0.3491 | 1.2196 | 0.775 | 0.7302 | 0.2668 | 0.0644 |
| 0.0005 | 97.0 | 1261 | 0.2991 | 0.775 | 0.3492 | 1.2196 | 0.775 | 0.7302 | 0.2602 | 0.0644 |
| 0.0005 | 98.0 | 1274 | 0.2991 | 0.775 | 0.3491 | 1.2194 | 0.775 | 0.7302 | 0.2533 | 0.0644 |
| 0.0005 | 99.0 | 1287 | 0.2991 | 0.775 | 0.3491 | 1.2195 | 0.775 | 0.7302 | 0.2602 | 0.0644 |
| 0.0005 | 100.0 | 1300 | 0.2991 | 0.775 | 0.3491 | 1.2196 | 0.775 | 0.7302 | 0.2602 | 0.0644 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu | gokuls | 2023-07-19T23:50:40Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-17T16:22:45Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4927
- Accuracy: 0.2227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 7.1222 | 0.08 | 10000 | 7.0986 | 0.0793 |
| 6.6978 | 0.16 | 20000 | 6.6919 | 0.1061 |
| 6.5525 | 0.25 | 30000 | 6.5486 | 0.1185 |
| 6.4684 | 0.33 | 40000 | 6.4581 | 0.1251 |
| 6.3964 | 0.41 | 50000 | 6.3927 | 0.1302 |
| 6.3478 | 0.49 | 60000 | 6.3393 | 0.1342 |
| 6.3036 | 0.57 | 70000 | 6.3004 | 0.1366 |
| 6.2727 | 0.66 | 80000 | 6.2671 | 0.1401 |
| 6.2394 | 0.74 | 90000 | 6.2365 | 0.1413 |
| 6.2124 | 0.82 | 100000 | 6.2146 | 0.1430 |
| 6.1946 | 0.9 | 110000 | 6.1936 | 0.1438 |
| 6.1769 | 0.98 | 120000 | 6.1724 | 0.1456 |
| 6.1466 | 1.07 | 130000 | 6.1497 | 0.1466 |
| 6.1217 | 1.15 | 140000 | 6.1160 | 0.1483 |
| 6.0912 | 1.23 | 150000 | 6.0844 | 0.1502 |
| 6.0452 | 1.31 | 160000 | 6.0317 | 0.1547 |
| 5.981 | 1.39 | 170000 | 5.9714 | 0.1596 |
| 5.9314 | 1.47 | 180000 | 5.9204 | 0.1641 |
| 5.8777 | 1.56 | 190000 | 5.8723 | 0.1690 |
| 5.7356 | 1.64 | 200000 | 5.7081 | 0.1907 |
| 5.5391 | 1.72 | 210000 | 5.4927 | 0.2227 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca | Andron00e | 2023-07-19T23:17:06Z | 1,456 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"question-answering",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-18T10:02:03Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: question-answering
metrics:
- accuracy
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Andron00e
- **Language(s) (NLP):** Python (PyTorch, transformers, peft)
- **License:** apache-2.0
- **Finetuned from model:** openlm-research/open_llama_3b
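Since the card does not include a usage snippet, the following is a minimal, hedged sketch of loading the checkpoint for generation with `transformers`. It assumes the repository hosts generation-ready (merged) weights rather than bare LoRA adapters; if only adapters were stored, they would instead be attached to `openlm-research/open_llama_3b` via `peft.PeftModel.from_pretrained`. The prompt format below is a placeholder, not taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Question-answering style prompt (format is an assumption, not documented in this card)
prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```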
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Andron00e/Fine-Tuning-project
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Open-Orca/OpenOrca
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Evaluation of the model was carried out using the EleutherAI lm-evaluation-harness library, at the exact version linked [here](https://github.com/EleutherAI/lm-evaluation-harness/tree/e47e01beea79cfe87421e2dac49e64d499c240b4#task-versioning)
#### Testing Data
<!-- This should link to a Data Card if possible. -->
hellaswag testing dataset
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Accuracy
### Results and Model Examination
| Task |Version| Metric |Value | |Stderr|
|---------|------:|--------|-----:|---|-----:|
|hellaswag| 0|acc |0.4899|± |0.0050|
| | |acc_norm|0.6506|± |0.0048|
## Citations
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}
```
## Model Card Authors and Contact
[Andron00e](https://github.com/Andron00e) |
LarryAIDraw/chihiro | LarryAIDraw | 2023-07-19T23:08:40Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-19T23:03:02Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/26324/chihiroblue-archive |
snicolau/ppo-SnowballTarget | snicolau | 2023-07-19T22:55:15Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-19T22:55:11Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: snicolau/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TaylorAI/Llama2-7B-SFT-LIMA-ct2 | TaylorAI | 2023-07-19T22:55:05Z | 3 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-07-19T16:01:55Z | This is a quantized version of Llama2-7B trained on the LIMA (Less is More for Alignment) dataset, located at `GAIR/lima` on HuggingFace.
To get started using this model, you'll need to install `transformers` (for the tokenizer) and `ctranslate2` (for the model). You'll
also need `huggingface_hub` to easily download the weights.
```
pip install -U transformers ctranslate2 huggingface_hub
```
Next, download this repository from the Hub. You can download the files manually and place them in a folder, or use the HuggingFace library
to download them programmatically. Here, we're putting them in a local directory called "Llama2_TaylorAI".
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="TaylorAI/Llama2-7B-SFT-LIMA-ct2", local_dir="Llama2_TaylorAI")
```
Then, you can perform inference as follows. Note that the model was trained with the separator `\n\n###\n\n` between the prompt/instruction
and the model's response, so to get the expected result, you'll want to append this to your prompt. The model was also trained to finish its
output with the suffix `@@@`, so you can stop generating tokens once you reach this suffix, or use it to split the completion and keep the
relevant part. All of this is shown in the example below.
```python
from typing import Any

from ctranslate2 import Generator
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TaylorAI/Llama2-7B-SFT-LIMA-ct2")

# point this wherever you stored this repository. if you have a GPU, use device="cuda", otherwise "cpu"
model = Generator("Llama2_TaylorAI", device="cuda")

# Unlike normal Transformers models, Ctranslate2 operates on actual "tokens" (little subword strings), not token ids (integers)
def tokenize_for_ct2(
    prompt: str,
    prompt_suffix: str,
    tokenizer: Any,
):
    full_prompt = prompt + prompt_suffix
    input_ids = tokenizer.encode(full_prompt)
    input_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    return input_tokens

example_input = "What is the meaning of life?"
example_input_tokens = tokenize_for_ct2(example_input, prompt_suffix="\n\n###\n\n", tokenizer=tokenizer)

# the model returns an iterator, from which we can lazily stream tokens
completion_tokens = []
it = model.generate_tokens(
    example_input_tokens,
    max_length=1024,
    sampling_topp=0.9,
    sampling_temperature=1.0,
    repetition_penalty=1.5,
)
stop_sequence = "@@@"
for step in it:
    completion_tokens.append(step.token_id)
    # stop early if we have generated the suffix
    output_so_far = tokenizer.decode(completion_tokens, skip_special_tokens=True)
    if output_so_far.endswith(stop_sequence):
        break

output = tokenizer.decode(completion_tokens, skip_special_tokens=True).split(stop_sequence)[0]
print(output)
```
|
PhysHunter/bert-finetuned-squad | PhysHunter | 2023-07-19T22:53:16Z | 131 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-19T20:37:55Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
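As the card does not show how to query the model, here is a minimal sketch using the standard `transformers` question-answering pipeline (only the repository id is taken from this card; the question and context below are placeholders):

```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
qa = pipeline("question-answering", model="PhysHunter/bert-finetuned-squad")

result = qa(
    question="What library was used to fine-tune the model?",
    context="The model was fine-tuned with the Hugging Face Transformers library on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```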
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LarryAIDraw/bambietta-05 | LarryAIDraw | 2023-07-19T22:51:05Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-19T22:43:29Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/112104/bambietta-or-bleach-or-lora-or-pop-waifu-series-usable-as-outfit-lora |
SAint7579/orpheus_ldm_model_v1-0 | SAint7579 | 2023-07-19T22:40:29Z | 15 | 0 | diffusers | [
"diffusers",
"tensorboard",
"music",
"audio-to-audio",
"en",
"dataset:SAint7579/orpheus_samples",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | audio-to-audio | 2023-07-16T17:48:47Z | ---
datasets:
- SAint7579/orpheus_samples
language:
- en
library_name: diffusers
pipeline_tag: audio-to-audio
tags:
- music
--- |
akdeniz27/ppo-Pyramids | akdeniz27 | 2023-07-19T22:18:47Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-19T22:18:40Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: akdeniz27/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/cbt-guten-rarity-all-mixed-cut-2p6k | NasimB | 2023-07-19T22:16:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-19T20:30:34Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-guten-rarity-all-mixed-cut-2p6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-guten-rarity-all-mixed-cut-2p6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
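The hyperparameters listed above map fairly directly onto `TrainingArguments`; the sketch below is a hedged reconstruction (the output directory and any data wiring are placeholders, not taken from the card):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed settings
training_args = TrainingArguments(
    output_dir="cbt-guten-rarity-all-mixed-cut-2p6k",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
)
```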
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6926 | 0.29 | 500 | 5.6317 |
| 5.3392 | 0.59 | 1000 | 5.1955 |
| 5.0004 | 0.88 | 1500 | 4.9509 |
| 4.7275 | 1.17 | 2000 | 4.8088 |
| 4.5644 | 1.46 | 2500 | 4.6844 |
| 4.4518 | 1.76 | 3000 | 4.5759 |
| 4.3336 | 2.05 | 3500 | 4.5021 |
| 4.137 | 2.34 | 4000 | 4.4540 |
| 4.1052 | 2.63 | 4500 | 4.3960 |
| 4.0754 | 2.93 | 5000 | 4.3380 |
| 3.8637 | 3.22 | 5500 | 4.3349 |
| 3.8081 | 3.51 | 6000 | 4.3053 |
| 3.7909 | 3.8 | 6500 | 4.2708 |
| 3.6948 | 4.1 | 7000 | 4.2703 |
| 3.5239 | 4.39 | 7500 | 4.2668 |
| 3.5181 | 4.68 | 8000 | 4.2517 |
| 3.5066 | 4.97 | 8500 | 4.2362 |
| 3.3432 | 5.27 | 9000 | 4.2494 |
| 3.3256 | 5.56 | 9500 | 4.2492 |
| 3.3315 | 5.85 | 10000 | 4.2485 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jordyvl/vit-base_tobacco_crl | jordyvl | 2023-07-19T22:11:53Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-19T19:55:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_crl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_crl
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8584
- Accuracy: 0.8
- Brier Loss: 0.3083
- Nll: 1.3299
- F1 Micro: 0.8000
- F1 Macro: 0.7728
- Ece: 0.2079
- Aurc: 0.0851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
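The effective batch size here comes from accumulation: 16 examples per device step × 16 accumulation steps = 256. A hedged `TrainingArguments` sketch of just the batching- and schedule-related settings (names other than the listed values are placeholders):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base_tobacco_crl",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,      # 16 * 16 = 256 total train batch size
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```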
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 0.8895 | 0.82 | 0.3092 | 1.1901 | 0.82 | 0.8049 | 0.2293 | 0.0751 |
| No log | 1.96 | 6 | 0.8886 | 0.81 | 0.3071 | 1.1861 | 0.81 | 0.7912 | 0.2245 | 0.0705 |
| No log | 2.96 | 9 | 0.8747 | 0.815 | 0.3065 | 1.1876 | 0.815 | 0.8021 | 0.2265 | 0.0734 |
| No log | 3.96 | 12 | 0.8812 | 0.805 | 0.3085 | 1.2661 | 0.805 | 0.7783 | 0.2087 | 0.0761 |
| No log | 4.96 | 15 | 0.8874 | 0.81 | 0.3080 | 1.1831 | 0.81 | 0.7871 | 0.2325 | 0.0786 |
| No log | 5.96 | 18 | 0.8818 | 0.81 | 0.3089 | 1.2715 | 0.81 | 0.7960 | 0.2345 | 0.0788 |
| No log | 6.96 | 21 | 0.8790 | 0.81 | 0.3045 | 1.2619 | 0.81 | 0.7904 | 0.2235 | 0.0693 |
| No log | 7.96 | 24 | 0.8794 | 0.805 | 0.3084 | 1.2566 | 0.805 | 0.7884 | 0.2205 | 0.0787 |
| No log | 8.96 | 27 | 0.8838 | 0.815 | 0.3134 | 1.3380 | 0.815 | 0.8072 | 0.2230 | 0.0751 |
| No log | 9.96 | 30 | 0.8849 | 0.8 | 0.3132 | 1.3205 | 0.8000 | 0.7757 | 0.2229 | 0.0824 |
| No log | 10.96 | 33 | 0.8633 | 0.81 | 0.3061 | 1.3978 | 0.81 | 0.7938 | 0.2004 | 0.0756 |
| No log | 11.96 | 36 | 0.8746 | 0.81 | 0.3089 | 1.3970 | 0.81 | 0.7918 | 0.2346 | 0.0741 |
| No log | 12.96 | 39 | 0.8625 | 0.805 | 0.3078 | 1.1961 | 0.805 | 0.7945 | 0.2505 | 0.0854 |
| No log | 13.96 | 42 | 0.8636 | 0.815 | 0.3068 | 1.2113 | 0.815 | 0.8046 | 0.2371 | 0.0804 |
| No log | 14.96 | 45 | 0.8906 | 0.79 | 0.3157 | 1.3748 | 0.79 | 0.7777 | 0.2279 | 0.0847 |
| No log | 15.96 | 48 | 0.8601 | 0.805 | 0.3040 | 1.2977 | 0.805 | 0.7876 | 0.2176 | 0.0805 |
| No log | 16.96 | 51 | 0.8606 | 0.815 | 0.3083 | 1.4136 | 0.815 | 0.8077 | 0.2279 | 0.0787 |
| No log | 17.96 | 54 | 0.9013 | 0.8 | 0.3261 | 1.2494 | 0.8000 | 0.7886 | 0.2194 | 0.0871 |
| No log | 18.96 | 57 | 0.8653 | 0.805 | 0.3143 | 1.4166 | 0.805 | 0.7935 | 0.2170 | 0.0786 |
| No log | 19.96 | 60 | 0.8459 | 0.81 | 0.3030 | 1.2629 | 0.81 | 0.7953 | 0.2129 | 0.0892 |
| No log | 20.96 | 63 | 0.8689 | 0.795 | 0.3106 | 1.2823 | 0.795 | 0.7725 | 0.2099 | 0.0828 |
| No log | 21.96 | 66 | 0.8563 | 0.81 | 0.3016 | 1.2789 | 0.81 | 0.7954 | 0.2324 | 0.0742 |
| No log | 22.96 | 69 | 0.8998 | 0.785 | 0.3231 | 1.6511 | 0.785 | 0.7642 | 0.2178 | 0.1015 |
| No log | 23.96 | 72 | 0.8338 | 0.805 | 0.2971 | 1.0504 | 0.805 | 0.7868 | 0.2135 | 0.0645 |
| No log | 24.96 | 75 | 0.8423 | 0.8 | 0.3040 | 1.4777 | 0.8000 | 0.7771 | 0.2283 | 0.0689 |
| No log | 25.96 | 78 | 0.8775 | 0.8 | 0.3218 | 1.4206 | 0.8000 | 0.7774 | 0.2204 | 0.1120 |
| No log | 26.96 | 81 | 0.8389 | 0.8 | 0.2984 | 1.1946 | 0.8000 | 0.7771 | 0.1990 | 0.0737 |
| No log | 27.96 | 84 | 0.9119 | 0.795 | 0.3319 | 1.6978 | 0.795 | 0.7805 | 0.2279 | 0.1109 |
| No log | 28.96 | 87 | 0.8689 | 0.805 | 0.3144 | 1.2644 | 0.805 | 0.7971 | 0.2216 | 0.0787 |
| No log | 29.96 | 90 | 0.8404 | 0.8 | 0.2990 | 1.1775 | 0.8000 | 0.7848 | 0.1962 | 0.0805 |
| No log | 30.96 | 93 | 0.8842 | 0.8 | 0.3226 | 1.3091 | 0.8000 | 0.7904 | 0.2168 | 0.1020 |
| No log | 31.96 | 96 | 0.8653 | 0.805 | 0.3086 | 1.3926 | 0.805 | 0.7818 | 0.1996 | 0.0853 |
| No log | 32.96 | 99 | 0.8767 | 0.785 | 0.3142 | 1.2268 | 0.785 | 0.7684 | 0.2117 | 0.0739 |
| No log | 33.96 | 102 | 0.9349 | 0.775 | 0.3410 | 1.3988 | 0.775 | 0.7600 | 0.2246 | 0.1024 |
| No log | 34.96 | 105 | 0.8606 | 0.79 | 0.3035 | 1.0902 | 0.79 | 0.7683 | 0.1954 | 0.0830 |
| No log | 35.96 | 108 | 0.8578 | 0.815 | 0.3050 | 1.3418 | 0.815 | 0.7923 | 0.2155 | 0.0923 |
| No log | 36.96 | 111 | 0.8641 | 0.795 | 0.3128 | 1.2449 | 0.795 | 0.7694 | 0.2068 | 0.0878 |
| No log | 37.96 | 114 | 0.8489 | 0.8 | 0.2996 | 1.2505 | 0.8000 | 0.7698 | 0.2027 | 0.0827 |
| No log | 38.96 | 117 | 0.8465 | 0.82 | 0.3011 | 1.3264 | 0.82 | 0.7947 | 0.2033 | 0.0923 |
| No log | 39.96 | 120 | 0.8608 | 0.8 | 0.3051 | 1.3178 | 0.8000 | 0.7706 | 0.2072 | 0.0894 |
| No log | 40.96 | 123 | 0.8592 | 0.8 | 0.3066 | 1.3141 | 0.8000 | 0.7692 | 0.2069 | 0.0909 |
| No log | 41.96 | 126 | 0.8611 | 0.805 | 0.3125 | 1.2988 | 0.805 | 0.7832 | 0.2094 | 0.0791 |
| No log | 42.96 | 129 | 0.8516 | 0.805 | 0.3000 | 1.3221 | 0.805 | 0.7791 | 0.2179 | 0.0884 |
| No log | 43.96 | 132 | 0.8587 | 0.8 | 0.3064 | 1.3414 | 0.8000 | 0.7784 | 0.2056 | 0.0922 |
| No log | 44.96 | 135 | 0.8691 | 0.79 | 0.3181 | 1.3262 | 0.79 | 0.7765 | 0.2153 | 0.0884 |
| No log | 45.96 | 138 | 0.8576 | 0.81 | 0.3066 | 1.1918 | 0.81 | 0.7847 | 0.2182 | 0.1009 |
| No log | 46.96 | 141 | 0.8722 | 0.8 | 0.3152 | 1.4909 | 0.8000 | 0.7798 | 0.2219 | 0.1012 |
| No log | 47.96 | 144 | 0.8399 | 0.81 | 0.3087 | 1.5338 | 0.81 | 0.7849 | 0.2138 | 0.0740 |
| No log | 48.96 | 147 | 0.8393 | 0.805 | 0.3004 | 1.3810 | 0.805 | 0.7819 | 0.2150 | 0.0696 |
| No log | 49.96 | 150 | 0.8899 | 0.78 | 0.3201 | 1.5622 | 0.78 | 0.7644 | 0.2227 | 0.0960 |
| No log | 50.96 | 153 | 0.8954 | 0.78 | 0.3249 | 1.6494 | 0.78 | 0.7654 | 0.2135 | 0.0902 |
| No log | 51.96 | 156 | 0.8259 | 0.79 | 0.2954 | 1.2271 | 0.79 | 0.7707 | 0.2129 | 0.0659 |
| No log | 52.96 | 159 | 0.8806 | 0.795 | 0.3145 | 1.4079 | 0.795 | 0.7759 | 0.2046 | 0.0877 |
| No log | 53.96 | 162 | 0.8842 | 0.81 | 0.3178 | 1.3465 | 0.81 | 0.7925 | 0.2173 | 0.1037 |
| No log | 54.96 | 165 | 0.8741 | 0.8 | 0.3173 | 1.4540 | 0.8000 | 0.7750 | 0.2079 | 0.0819 |
| No log | 55.96 | 168 | 0.8242 | 0.8 | 0.2964 | 1.3053 | 0.8000 | 0.7838 | 0.1972 | 0.0670 |
| No log | 56.96 | 171 | 0.8350 | 0.825 | 0.2962 | 1.2110 | 0.825 | 0.8135 | 0.2126 | 0.0780 |
| No log | 57.96 | 174 | 0.8491 | 0.815 | 0.3034 | 1.3250 | 0.815 | 0.8070 | 0.2116 | 0.0875 |
| No log | 58.96 | 177 | 0.8584 | 0.795 | 0.3119 | 1.3162 | 0.795 | 0.7764 | 0.1956 | 0.0860 |
| No log | 59.96 | 180 | 0.8546 | 0.79 | 0.3115 | 1.3315 | 0.79 | 0.7740 | 0.1855 | 0.0828 |
| No log | 60.96 | 183 | 0.8564 | 0.79 | 0.3068 | 1.3275 | 0.79 | 0.7760 | 0.2008 | 0.0862 |
| No log | 61.96 | 186 | 0.8573 | 0.795 | 0.3068 | 1.3160 | 0.795 | 0.7738 | 0.2117 | 0.0884 |
| No log | 62.96 | 189 | 0.8503 | 0.785 | 0.3088 | 1.3498 | 0.785 | 0.7650 | 0.2069 | 0.0856 |
| No log | 63.96 | 192 | 0.8639 | 0.81 | 0.3111 | 1.2614 | 0.81 | 0.7873 | 0.2247 | 0.0893 |
| No log | 64.96 | 195 | 0.8744 | 0.805 | 0.3128 | 1.3294 | 0.805 | 0.7888 | 0.2096 | 0.0912 |
| No log | 65.96 | 198 | 0.8727 | 0.8 | 0.3138 | 1.4212 | 0.8000 | 0.7903 | 0.2031 | 0.0849 |
| No log | 66.96 | 201 | 0.8612 | 0.79 | 0.3084 | 1.3592 | 0.79 | 0.7702 | 0.1855 | 0.0816 |
| No log | 67.96 | 204 | 0.8576 | 0.79 | 0.3071 | 1.4005 | 0.79 | 0.7667 | 0.1896 | 0.0863 |
| No log | 68.96 | 207 | 0.8540 | 0.805 | 0.3037 | 1.3957 | 0.805 | 0.7775 | 0.2263 | 0.0876 |
| No log | 69.96 | 210 | 0.8499 | 0.81 | 0.2982 | 1.3987 | 0.81 | 0.7874 | 0.2109 | 0.0856 |
| No log | 70.96 | 213 | 0.8465 | 0.815 | 0.3001 | 1.3222 | 0.815 | 0.7901 | 0.2224 | 0.0928 |
| No log | 71.96 | 216 | 0.8541 | 0.81 | 0.3041 | 1.3331 | 0.81 | 0.7827 | 0.2169 | 0.0897 |
| No log | 72.96 | 219 | 0.8546 | 0.795 | 0.3066 | 1.3991 | 0.795 | 0.7720 | 0.2141 | 0.0871 |
| No log | 73.96 | 222 | 0.8569 | 0.79 | 0.3039 | 1.3544 | 0.79 | 0.7672 | 0.1958 | 0.0863 |
| No log | 74.96 | 225 | 0.8622 | 0.805 | 0.3028 | 1.3384 | 0.805 | 0.7847 | 0.1938 | 0.0879 |
| No log | 75.96 | 228 | 0.8610 | 0.805 | 0.3039 | 1.3285 | 0.805 | 0.7810 | 0.2033 | 0.0947 |
| No log | 76.96 | 231 | 0.8581 | 0.81 | 0.3031 | 1.3334 | 0.81 | 0.7840 | 0.1993 | 0.0944 |
| No log | 77.96 | 234 | 0.8607 | 0.8 | 0.3055 | 1.3260 | 0.8000 | 0.7785 | 0.1979 | 0.0899 |
| No log | 78.96 | 237 | 0.8642 | 0.79 | 0.3068 | 1.3928 | 0.79 | 0.7672 | 0.1822 | 0.0869 |
| No log | 79.96 | 240 | 0.8640 | 0.805 | 0.3044 | 1.3311 | 0.805 | 0.7786 | 0.2001 | 0.0916 |
| No log | 80.96 | 243 | 0.8648 | 0.81 | 0.3056 | 1.2812 | 0.81 | 0.7836 | 0.2173 | 0.0955 |
| No log | 81.96 | 246 | 0.8639 | 0.825 | 0.3056 | 1.3295 | 0.825 | 0.8062 | 0.1952 | 0.0913 |
| No log | 82.96 | 249 | 0.8643 | 0.805 | 0.3082 | 1.3334 | 0.805 | 0.7887 | 0.2108 | 0.0881 |
| No log | 83.96 | 252 | 0.8626 | 0.795 | 0.3068 | 1.3334 | 0.795 | 0.7780 | 0.2097 | 0.0845 |
| No log | 84.96 | 255 | 0.8586 | 0.81 | 0.3033 | 1.2646 | 0.81 | 0.7893 | 0.2035 | 0.0808 |
| No log | 85.96 | 258 | 0.8570 | 0.805 | 0.3024 | 1.2694 | 0.805 | 0.7802 | 0.1947 | 0.0811 |
| No log | 86.96 | 261 | 0.8557 | 0.795 | 0.3023 | 1.3261 | 0.795 | 0.7657 | 0.1966 | 0.0828 |
| No log | 87.96 | 264 | 0.8576 | 0.8 | 0.3051 | 1.3283 | 0.8000 | 0.7754 | 0.2072 | 0.0848 |
| No log | 88.96 | 267 | 0.8537 | 0.8 | 0.3083 | 1.3257 | 0.8000 | 0.7771 | 0.2167 | 0.0859 |
| No log | 89.96 | 270 | 0.8591 | 0.795 | 0.3106 | 1.3262 | 0.795 | 0.7737 | 0.2011 | 0.0866 |
| No log | 90.96 | 273 | 0.8612 | 0.785 | 0.3122 | 1.3279 | 0.785 | 0.7594 | 0.1885 | 0.0868 |
| No log | 91.96 | 276 | 0.8571 | 0.795 | 0.3104 | 1.3248 | 0.795 | 0.7667 | 0.1966 | 0.0853 |
| No log | 92.96 | 279 | 0.8560 | 0.795 | 0.3082 | 1.3244 | 0.795 | 0.7667 | 0.2147 | 0.0836 |
| No log | 93.96 | 282 | 0.8551 | 0.8 | 0.3071 | 1.3251 | 0.8000 | 0.7766 | 0.2109 | 0.0830 |
| No log | 94.96 | 285 | 0.8556 | 0.79 | 0.3076 | 1.3264 | 0.79 | 0.7577 | 0.1885 | 0.0834 |
| No log | 95.96 | 288 | 0.8569 | 0.795 | 0.3078 | 1.3280 | 0.795 | 0.7675 | 0.1980 | 0.0840 |
| No log | 96.96 | 291 | 0.8581 | 0.795 | 0.3082 | 1.3290 | 0.795 | 0.7675 | 0.2039 | 0.0842 |
| No log | 97.96 | 294 | 0.8585 | 0.8 | 0.3084 | 1.3300 | 0.8000 | 0.7728 | 0.2137 | 0.0849 |
| No log | 98.96 | 297 | 0.8589 | 0.8 | 0.3083 | 1.3301 | 0.8000 | 0.7728 | 0.2156 | 0.0850 |
| No log | 99.96 | 300 | 0.8584 | 0.8 | 0.3083 | 1.3299 | 0.8000 | 0.7728 | 0.2079 | 0.0851 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ufal/byt5-small-multilexnorm2021-nl | ufal | 2023-07-19T21:57:00Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"lexical normalization",
"nl",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: nl
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Dutch version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
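For completeness, a hedged loading sketch is shown below. It only demonstrates how to load the checkpoint with `transformers`; the actual token-to-token input construction and post-processing (how each word and its sentence context are fed to the model) should follow the Colab notebook linked above.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "ufal/byt5-small-multilexnorm2021-nl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# NOTE: the input string below is a placeholder; the real per-token input format
# is defined in the linked notebook and is not reproduced here.
inputs = tokenizer("wrm zou je dat doen", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```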
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
Falcinspire/dqn-SpaceInvadersNoFrameskip-v4 | Falcinspire | 2023-07-19T21:51:27Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-19T21:50:52Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 619.00 +/- 300.17
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Falcinspire -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Falcinspire -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Falcinspire
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
olegs/whisper-tiny-minds14 | olegs | 2023-07-19T21:45:03Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-16T19:49:53Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34415584415584416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7429
- Wer Ortho: 35.7804
- Wer: 0.3442
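A minimal inference sketch using the `transformers` automatic-speech-recognition pipeline (the audio file path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hub
asr = pipeline("automatic-speech-recognition", model="olegs/whisper-tiny-minds14")

# "sample.wav" is a placeholder path to a local audio recording
print(asr("sample.wav")["text"])
```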
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0065 | 14.29 | 200 | 0.6740 | 34.6083 | 0.3241 |
| 0.0009 | 28.57 | 400 | 0.7429 | 35.7804 | 0.3442 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
kweston/finetuning-sentiment-model-3000-samples | kweston | 2023-07-19T21:23:17Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-19T21:16:23Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3182
- Accuracy: 0.8733
- F1: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/300-tiny_tobacco3482_kd | jordyvl | 2023-07-19T21:22:16Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-19T10:52:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 300-tiny_tobacco3482_kd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 300-tiny_tobacco3482_kd
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3298
- Accuracy: 0.79
- Brier Loss: 0.3334
- Nll: 1.0051
- F1 Micro: 0.79
- F1 Macro: 0.7591
- Ece: 0.2152
- Aurc: 0.0601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 1.7587 | 0.225 | 0.8896 | 7.6699 | 0.225 | 0.1499 | 0.2894 | 0.7588 |
| No log | 2.0 | 26 | 1.1905 | 0.4 | 0.7901 | 3.7933 | 0.4000 | 0.2961 | 0.3234 | 0.4395 |
| No log | 3.0 | 39 | 0.9530 | 0.53 | 0.6670 | 2.9907 | 0.53 | 0.4066 | 0.3137 | 0.2821 |
| No log | 4.0 | 52 | 0.8046 | 0.615 | 0.5821 | 2.2124 | 0.615 | 0.4898 | 0.3097 | 0.1862 |
| No log | 5.0 | 65 | 0.7084 | 0.685 | 0.5084 | 2.1522 | 0.685 | 0.6027 | 0.3191 | 0.1389 |
| No log | 6.0 | 78 | 0.7243 | 0.68 | 0.4683 | 2.2673 | 0.68 | 0.5904 | 0.2695 | 0.1279 |
| No log | 7.0 | 91 | 0.6734 | 0.67 | 0.4675 | 2.2909 | 0.67 | 0.5844 | 0.2436 | 0.1510 |
| No log | 8.0 | 104 | 0.5780 | 0.7 | 0.4215 | 2.0061 | 0.7 | 0.6160 | 0.2418 | 0.1016 |
| No log | 9.0 | 117 | 0.6270 | 0.71 | 0.4402 | 1.8620 | 0.7100 | 0.6574 | 0.2485 | 0.1249 |
| No log | 10.0 | 130 | 0.5604 | 0.72 | 0.4074 | 1.5914 | 0.72 | 0.6430 | 0.2566 | 0.0935 |
| No log | 11.0 | 143 | 0.5814 | 0.705 | 0.4079 | 1.6933 | 0.705 | 0.6190 | 0.2350 | 0.1035 |
| No log | 12.0 | 156 | 0.5901 | 0.71 | 0.4176 | 1.7974 | 0.7100 | 0.6472 | 0.2225 | 0.1058 |
| No log | 13.0 | 169 | 0.5041 | 0.71 | 0.3918 | 1.8429 | 0.7100 | 0.6562 | 0.2336 | 0.0958 |
| No log | 14.0 | 182 | 0.5099 | 0.72 | 0.3982 | 1.6343 | 0.72 | 0.6550 | 0.2202 | 0.1021 |
| No log | 15.0 | 195 | 0.4843 | 0.745 | 0.3951 | 1.3599 | 0.745 | 0.6719 | 0.2680 | 0.0884 |
| No log | 16.0 | 208 | 0.4529 | 0.74 | 0.3776 | 1.3838 | 0.74 | 0.6951 | 0.2112 | 0.0839 |
| No log | 17.0 | 221 | 0.4420 | 0.745 | 0.3782 | 1.4403 | 0.745 | 0.6982 | 0.2285 | 0.0800 |
| No log | 18.0 | 234 | 0.4428 | 0.755 | 0.3710 | 1.3696 | 0.755 | 0.7298 | 0.2170 | 0.0825 |
| No log | 19.0 | 247 | 0.4306 | 0.75 | 0.3794 | 1.4095 | 0.75 | 0.7235 | 0.2470 | 0.0862 |
| No log | 20.0 | 260 | 0.4166 | 0.74 | 0.3648 | 1.2893 | 0.74 | 0.6776 | 0.2312 | 0.0835 |
| No log | 21.0 | 273 | 0.3830 | 0.77 | 0.3524 | 1.1764 | 0.7700 | 0.7256 | 0.2535 | 0.0730 |
| No log | 22.0 | 286 | 0.3918 | 0.77 | 0.3564 | 1.2293 | 0.7700 | 0.7067 | 0.2372 | 0.0710 |
| No log | 23.0 | 299 | 0.4125 | 0.75 | 0.3656 | 1.1419 | 0.75 | 0.7109 | 0.2357 | 0.0766 |
| No log | 24.0 | 312 | 0.3771 | 0.785 | 0.3543 | 1.0960 | 0.785 | 0.7583 | 0.2345 | 0.0712 |
| No log | 25.0 | 325 | 0.3846 | 0.745 | 0.3613 | 1.0616 | 0.745 | 0.7061 | 0.2060 | 0.0766 |
| No log | 26.0 | 338 | 0.3660 | 0.77 | 0.3547 | 1.3094 | 0.7700 | 0.7196 | 0.2515 | 0.0724 |
| No log | 27.0 | 351 | 0.3634 | 0.78 | 0.3476 | 1.0645 | 0.78 | 0.7479 | 0.2401 | 0.0677 |
| No log | 28.0 | 364 | 0.3715 | 0.755 | 0.3522 | 1.1981 | 0.755 | 0.6984 | 0.2257 | 0.0709 |
| No log | 29.0 | 377 | 0.3701 | 0.765 | 0.3597 | 1.1645 | 0.765 | 0.7239 | 0.2631 | 0.0747 |
| No log | 30.0 | 390 | 0.3562 | 0.775 | 0.3465 | 1.1094 | 0.775 | 0.7140 | 0.2428 | 0.0659 |
| No log | 31.0 | 403 | 0.3811 | 0.775 | 0.3499 | 1.2515 | 0.775 | 0.7368 | 0.2214 | 0.0694 |
| No log | 32.0 | 416 | 0.3555 | 0.77 | 0.3439 | 1.1715 | 0.7700 | 0.7053 | 0.2532 | 0.0705 |
| No log | 33.0 | 429 | 0.3592 | 0.775 | 0.3449 | 1.1606 | 0.775 | 0.7364 | 0.2336 | 0.0729 |
| No log | 34.0 | 442 | 0.3555 | 0.78 | 0.3431 | 1.1054 | 0.78 | 0.7373 | 0.2143 | 0.0653 |
| No log | 35.0 | 455 | 0.3454 | 0.77 | 0.3415 | 1.0386 | 0.7700 | 0.7333 | 0.2463 | 0.0668 |
| No log | 36.0 | 468 | 0.3403 | 0.8 | 0.3394 | 1.1435 | 0.8000 | 0.7664 | 0.2674 | 0.0625 |
| No log | 37.0 | 481 | 0.3390 | 0.785 | 0.3379 | 1.1183 | 0.785 | 0.7552 | 0.2432 | 0.0633 |
| No log | 38.0 | 494 | 0.3413 | 0.79 | 0.3347 | 1.1538 | 0.79 | 0.7406 | 0.2239 | 0.0615 |
| 0.2994 | 39.0 | 507 | 0.3364 | 0.795 | 0.3362 | 0.9975 | 0.795 | 0.7650 | 0.2334 | 0.0639 |
| 0.2994 | 40.0 | 520 | 0.3340 | 0.79 | 0.3328 | 1.0045 | 0.79 | 0.7466 | 0.2711 | 0.0580 |
| 0.2994 | 41.0 | 533 | 0.3381 | 0.77 | 0.3391 | 0.9829 | 0.7700 | 0.7427 | 0.2147 | 0.0675 |
| 0.2994 | 42.0 | 546 | 0.3297 | 0.8 | 0.3319 | 1.0739 | 0.8000 | 0.7685 | 0.2613 | 0.0585 |
| 0.2994 | 43.0 | 559 | 0.3338 | 0.8 | 0.3373 | 1.1507 | 0.8000 | 0.7719 | 0.2491 | 0.0637 |
| 0.2994 | 44.0 | 572 | 0.3316 | 0.79 | 0.3359 | 1.1274 | 0.79 | 0.7539 | 0.2469 | 0.0620 |
| 0.2994 | 45.0 | 585 | 0.3283 | 0.79 | 0.3336 | 1.0644 | 0.79 | 0.7531 | 0.2636 | 0.0612 |
| 0.2994 | 46.0 | 598 | 0.3297 | 0.8 | 0.3344 | 1.1343 | 0.8000 | 0.7670 | 0.2317 | 0.0600 |
| 0.2994 | 47.0 | 611 | 0.3293 | 0.79 | 0.3318 | 1.0692 | 0.79 | 0.7542 | 0.2396 | 0.0616 |
| 0.2994 | 48.0 | 624 | 0.3339 | 0.79 | 0.3357 | 1.1225 | 0.79 | 0.7590 | 0.2508 | 0.0617 |
| 0.2994 | 49.0 | 637 | 0.3290 | 0.795 | 0.3343 | 1.0692 | 0.795 | 0.7618 | 0.2529 | 0.0604 |
| 0.2994 | 50.0 | 650 | 0.3298 | 0.79 | 0.3348 | 1.1343 | 0.79 | 0.7591 | 0.2330 | 0.0609 |
| 0.2994 | 51.0 | 663 | 0.3305 | 0.795 | 0.3330 | 1.0045 | 0.795 | 0.7618 | 0.2357 | 0.0607 |
| 0.2994 | 52.0 | 676 | 0.3299 | 0.79 | 0.3339 | 1.0722 | 0.79 | 0.7542 | 0.2562 | 0.0614 |
| 0.2994 | 53.0 | 689 | 0.3280 | 0.8 | 0.3325 | 1.0688 | 0.8000 | 0.7685 | 0.2500 | 0.0593 |
| 0.2994 | 54.0 | 702 | 0.3284 | 0.795 | 0.3323 | 1.0175 | 0.795 | 0.7618 | 0.2436 | 0.0598 |
| 0.2994 | 55.0 | 715 | 0.3287 | 0.79 | 0.3331 | 1.0750 | 0.79 | 0.7591 | 0.2497 | 0.0604 |
| 0.2994 | 56.0 | 728 | 0.3286 | 0.795 | 0.3335 | 1.0115 | 0.795 | 0.7618 | 0.2296 | 0.0602 |
| 0.2994 | 57.0 | 741 | 0.3285 | 0.79 | 0.3330 | 1.0648 | 0.79 | 0.7591 | 0.2446 | 0.0602 |
| 0.2994 | 58.0 | 754 | 0.3299 | 0.795 | 0.3339 | 1.0193 | 0.795 | 0.7618 | 0.2345 | 0.0608 |
| 0.2994 | 59.0 | 767 | 0.3294 | 0.79 | 0.3329 | 1.0139 | 0.79 | 0.7591 | 0.2369 | 0.0601 |
| 0.2994 | 60.0 | 780 | 0.3292 | 0.795 | 0.3332 | 1.0118 | 0.795 | 0.7618 | 0.2226 | 0.0601 |
| 0.2994 | 61.0 | 793 | 0.3293 | 0.795 | 0.3333 | 1.0716 | 0.795 | 0.7618 | 0.2282 | 0.0602 |
| 0.2994 | 62.0 | 806 | 0.3294 | 0.795 | 0.3331 | 1.0107 | 0.795 | 0.7618 | 0.2224 | 0.0601 |
| 0.2994 | 63.0 | 819 | 0.3295 | 0.795 | 0.3336 | 1.0144 | 0.795 | 0.7618 | 0.2294 | 0.0605 |
| 0.2994 | 64.0 | 832 | 0.3293 | 0.795 | 0.3332 | 1.0104 | 0.795 | 0.7618 | 0.2324 | 0.0603 |
| 0.2994 | 65.0 | 845 | 0.3298 | 0.795 | 0.3337 | 1.0114 | 0.795 | 0.7618 | 0.2478 | 0.0606 |
| 0.2994 | 66.0 | 858 | 0.3297 | 0.795 | 0.3333 | 1.0076 | 0.795 | 0.7618 | 0.2366 | 0.0601 |
| 0.2994 | 67.0 | 871 | 0.3298 | 0.79 | 0.3338 | 1.0120 | 0.79 | 0.7591 | 0.2513 | 0.0606 |
| 0.2994 | 68.0 | 884 | 0.3297 | 0.795 | 0.3337 | 1.0110 | 0.795 | 0.7618 | 0.2376 | 0.0605 |
| 0.2994 | 69.0 | 897 | 0.3297 | 0.795 | 0.3335 | 1.0115 | 0.795 | 0.7618 | 0.2228 | 0.0602 |
| 0.2994 | 70.0 | 910 | 0.3292 | 0.795 | 0.3333 | 1.0089 | 0.795 | 0.7618 | 0.2215 | 0.0602 |
| 0.2994 | 71.0 | 923 | 0.3297 | 0.795 | 0.3334 | 1.0083 | 0.795 | 0.7618 | 0.2226 | 0.0600 |
| 0.2994 | 72.0 | 936 | 0.3297 | 0.79 | 0.3335 | 1.0072 | 0.79 | 0.7591 | 0.2257 | 0.0604 |
| 0.2994 | 73.0 | 949 | 0.3297 | 0.795 | 0.3332 | 1.0060 | 0.795 | 0.7618 | 0.2381 | 0.0600 |
| 0.2994 | 74.0 | 962 | 0.3295 | 0.795 | 0.3335 | 1.0082 | 0.795 | 0.7618 | 0.2366 | 0.0603 |
| 0.2994 | 75.0 | 975 | 0.3296 | 0.79 | 0.3334 | 1.0089 | 0.79 | 0.7591 | 0.2373 | 0.0601 |
| 0.2994 | 76.0 | 988 | 0.3298 | 0.795 | 0.3334 | 1.0098 | 0.795 | 0.7618 | 0.2310 | 0.0602 |
| 0.0006 | 77.0 | 1001 | 0.3297 | 0.79 | 0.3334 | 1.0084 | 0.79 | 0.7591 | 0.2228 | 0.0603 |
| 0.0006 | 78.0 | 1014 | 0.3297 | 0.79 | 0.3333 | 1.0071 | 0.79 | 0.7591 | 0.2148 | 0.0600 |
| 0.0006 | 79.0 | 1027 | 0.3298 | 0.795 | 0.3334 | 1.0059 | 0.795 | 0.7618 | 0.2309 | 0.0602 |
| 0.0006 | 80.0 | 1040 | 0.3298 | 0.795 | 0.3334 | 1.0046 | 0.795 | 0.7618 | 0.2309 | 0.0602 |
| 0.0006 | 81.0 | 1053 | 0.3298 | 0.79 | 0.3335 | 1.0073 | 0.79 | 0.7591 | 0.2239 | 0.0602 |
| 0.0006 | 82.0 | 1066 | 0.3298 | 0.795 | 0.3336 | 1.0072 | 0.795 | 0.7618 | 0.2317 | 0.0603 |
| 0.0006 | 83.0 | 1079 | 0.3297 | 0.795 | 0.3334 | 1.0055 | 0.795 | 0.7618 | 0.2224 | 0.0601 |
| 0.0006 | 84.0 | 1092 | 0.3298 | 0.79 | 0.3335 | 1.0061 | 0.79 | 0.7591 | 0.2240 | 0.0601 |
| 0.0006 | 85.0 | 1105 | 0.3297 | 0.79 | 0.3334 | 1.0052 | 0.79 | 0.7591 | 0.2322 | 0.0601 |
| 0.0006 | 86.0 | 1118 | 0.3298 | 0.79 | 0.3335 | 1.0059 | 0.79 | 0.7591 | 0.2323 | 0.0602 |
| 0.0006 | 87.0 | 1131 | 0.3298 | 0.79 | 0.3335 | 1.0065 | 0.79 | 0.7591 | 0.2152 | 0.0602 |
| 0.0006 | 88.0 | 1144 | 0.3298 | 0.79 | 0.3335 | 1.0056 | 0.79 | 0.7591 | 0.2235 | 0.0603 |
| 0.0006 | 89.0 | 1157 | 0.3297 | 0.79 | 0.3334 | 1.0050 | 0.79 | 0.7591 | 0.2152 | 0.0602 |
| 0.0006 | 90.0 | 1170 | 0.3297 | 0.79 | 0.3334 | 1.0049 | 0.79 | 0.7591 | 0.2153 | 0.0602 |
| 0.0006 | 91.0 | 1183 | 0.3297 | 0.79 | 0.3334 | 1.0059 | 0.79 | 0.7591 | 0.2234 | 0.0601 |
| 0.0006 | 92.0 | 1196 | 0.3298 | 0.79 | 0.3334 | 1.0049 | 0.79 | 0.7591 | 0.2152 | 0.0602 |
| 0.0006 | 93.0 | 1209 | 0.3299 | 0.79 | 0.3335 | 1.0056 | 0.79 | 0.7591 | 0.2152 | 0.0601 |
| 0.0006 | 94.0 | 1222 | 0.3298 | 0.79 | 0.3335 | 1.0049 | 0.79 | 0.7591 | 0.2152 | 0.0602 |
| 0.0006 | 95.0 | 1235 | 0.3298 | 0.79 | 0.3334 | 1.0048 | 0.79 | 0.7591 | 0.2152 | 0.0602 |
| 0.0006 | 96.0 | 1248 | 0.3298 | 0.79 | 0.3334 | 1.0050 | 0.79 | 0.7591 | 0.2152 | 0.0601 |
| 0.0006 | 97.0 | 1261 | 0.3298 | 0.79 | 0.3335 | 1.0053 | 0.79 | 0.7591 | 0.2152 | 0.0602 |
| 0.0006 | 98.0 | 1274 | 0.3298 | 0.79 | 0.3334 | 1.0051 | 0.79 | 0.7591 | 0.2152 | 0.0602 |
| 0.0006 | 99.0 | 1287 | 0.3298 | 0.79 | 0.3334 | 1.0052 | 0.79 | 0.7591 | 0.2152 | 0.0601 |
| 0.0006 | 100.0 | 1300 | 0.3298 | 0.79 | 0.3334 | 1.0051 | 0.79 | 0.7591 | 0.2152 | 0.0601 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Corran/test2 | Corran | 2023-07-19T20:58:54Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-07-19T20:58:17Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Corran/test2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Corran/test2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Oslaw/poca-SoccerTwos | Oslaw | 2023-07-19T20:57:53Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-07-19T20:57:24Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:

https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Oslaw/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
styraist/turkishReview-ds-mini | styraist | 2023-07-19T20:52:35Z | 61 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-17T15:09:25Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: turkishReview-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReview-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
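The serialized optimizer above corresponds to the `AdamWeightDecay` plus warm-up schedule that `transformers.create_optimizer` builds for TensorFlow training; a hedged reconstruction follows (the number of training steps is a placeholder, since only the schedule internals are dumped above):

```python
from transformers import create_optimizer

# Hypothetical reconstruction; num_train_steps is a placeholder value
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_warmup_steps=1000,
    num_train_steps=1000,      # placeholder: warmup steps + decay steps
    weight_decay_rate=0.01,
)
```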
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fedbor/sesto_modello | fedbor | 2023-07-19T20:51:52Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-19T20:51:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
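The flags above can be expressed as a `BitsAndBytesConfig` when reloading the base model for this adapter; a hedged sketch is shown below (the base model id is a placeholder, since the card does not name it):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

base_model_id = "base-model-id"  # placeholder: the card does not state the base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "fedbor/sesto_modello")
```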
### Framework versions
- PEFT 0.5.0.dev0
|