modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-04 06:27:36) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 466 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-04 06:25:54) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---
Yntec/C-.-_-.-Aravaggio | Yntec | 2023-12-21T01:37:47Z | 253 | 4 | diffusers | [
"diffusers",
"safetensors",
"Base model",
"General",
"Everything",
"Redigleb_Doppler2482",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-21T01:09:00Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Base model
- General
- Everything
- Redigleb_Doppler2482
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# C++AravaggioV0.9 - an Answer to both Dall-E and Kandinsky 2.1
Original page: https://civitai.com/models/93155/caravaggiov09-an-answer-to-both-dall-e-and-kandinsky-21?modelVersionId=99323
Samples and prompts:

Top left: Anime fine details portrait of joyful cute little girl lay school class room, bokeh. anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of hayao miyazaki. Wikipedia. hugging. OIL PAINTING. DOCTOR with short hair in coat BEAUTIFUL girl eyes. she has pigtails
Top right: House with a waterwheel built into the roots of a giant tree, next to games, a colorful river landscape painting from a fantasy point and click 2 d graphic adventure game, art inspired by ROSSDRAWS and larry elmore and john shroades, king's quest, sierra entertainment
Bottom left: An underwater world with vibrant coral reefs and schools of colorful fish. The artistic style is pop art, with bold and bright colors and graphic shapes. The light setting is filtered through the water, creating a surreal and dreamy effect. The mood of the image is energetic and lively, capturing the movement and vitality of the underwater environment.
Bottom right: pretty young girl riding bike down the ocean streets of japan, teddy bear hour |
meta-math/MetaMath-7B-V1.0 | meta-math | 2023-12-21T01:37:17Z | 25,981 | 26 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:meta-math/MetaMathQA",
"arxiv:2309.12284",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-21T08:33:54Z | ---
license: llama2
datasets:
- meta-math/MetaMathQA
---
See our paper at https://arxiv.org/abs/2309.12284.
View the project page: https://meta-math.github.io/
## Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
You can check the `original_question` field in `meta-math/MetaMathQA`; each item comes from the GSM8K or MATH train set.
## Model Details
MetaMath-Llemma-7B is fully fine-tuned on the MetaMathQA dataset and is based on the powerful Llemma-7B model. We are glad to see that using the MetaMathQA dataset and switching the base model from LLaMA-2-7B to Llemma-7B boosts MATH performance from 19.8 to **30.0**.
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
Prompting template:
```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
Replace `{instruction}` with your query question.
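A minimal inference sketch with 🤗 Transformers (not part of the original card; the example question and greedy-decoding settings are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-math/MetaMath-7B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{question}\n\n### Response: Let's think step by step."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```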
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Llemma-7B** | **69.2** | **30.0** |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
## Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
``` |
meta-math/MetaMath-13B-V1.0 | meta-math | 2023-12-21T01:36:24Z | 1,467 | 13 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:meta-math/MetaMathQA",
"arxiv:2309.12284",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-22T02:37:11Z | ---
license: llama2
datasets:
- meta-math/MetaMathQA
---
See our paper at https://arxiv.org/abs/2309.12284.
View the project page: https://meta-math.github.io/
## Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
You can check the `original_question` field in `meta-math/MetaMathQA`; each item comes from the GSM8K or MATH train set.
## Model Details
MetaMath-Llemma-7B is fully fine-tuned on the MetaMathQA dataset and is based on the powerful Llemma-7B model. We are glad to see that using the MetaMathQA dataset and switching the base model from LLaMA-2-7B to Llemma-7B boosts MATH performance from 19.8 to **30.0**.
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
Prompting template:
```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
Replace `{instruction}` with your query question.
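Since the install list above includes vLLM, here is a hedged sketch of running the same template through it (the sampling settings below are assumptions):
```python
from vllm import LLM, SamplingParams

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is 15% of 240?\n\n### Response: Let's think step by step."
)

llm = LLM(model="meta-math/MetaMath-13B-V1.0")
sampling_params = SamplingParams(temperature=0.0, max_tokens=512)  # greedy decoding
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```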
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Llemma-7B** | **69.2** | **30.0** |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
## Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
``` |
LarryAIDraw/sumi-kanokari-01 | LarryAIDraw | 2023-12-21T01:30:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T01:26:43Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/228737/sumi-sakurasawa-kanojo-okarishimasu |
LarryAIDraw/mami-kanokari-01 | LarryAIDraw | 2023-12-21T01:30:10Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T01:26:20Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/228734/mami-nanami-kanojo-okarishimasu |
LarryAIDraw/ruka-kanokari-01 | LarryAIDraw | 2023-12-21T01:29:57Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T01:25:56Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/228738/ruka-sarashina-kanojo-okarishimasu |
vkamal/falcon_7b_instruct_trained_mental_health_lora | vkamal | 2023-12-21T01:22:44Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"region:us"
] | null | 2023-12-21T01:22:27Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: tiiuae/falcon-7b-instruct
model-index:
- name: falcon_7b_instruct_trained_mental_health_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_7b_instruct_trained_mental_health_lora
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unspecified dataset.
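As a minimal sketch (not in the original card), the LoRA adapter can be attached to the base model with PEFT; the dtype, example prompt, and generation settings below are assumptions:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-7b-instruct"
adapter_id = "vkamal/falcon_7b_instruct_trained_mental_health_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # load the LoRA weights on top of the base model

prompt = "I have been feeling anxious lately. What are some coping strategies?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```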
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 600
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
vkamal/falcon_fine_tuned_7b_instruct | vkamal | 2023-12-21T01:22:27Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"region:us"
] | null | 2023-12-21T01:22:24Z | ---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
afrideva/phi2-oasst-guanaco-bf16-custom-GGUF | afrideva | 2023-12-21T01:17:30Z | 26 | 0 | null | [
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"base_model:anilbhatt1/phi2-oasst-guanaco-bf16-custom",
"base_model:quantized:anilbhatt1/phi2-oasst-guanaco-bf16-custom",
"license:mit",
"region:us"
] | text-generation | 2023-12-21T01:09:04Z | ---
base_model: anilbhatt1/phi2-oasst-guanaco-bf16-custom
inference: false
license: mit
model_creator: anilbhatt1
model_name: phi2-oasst-guanaco-bf16-custom
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# anilbhatt1/phi2-oasst-guanaco-bf16-custom-GGUF
Quantized GGUF model files for [phi2-oasst-guanaco-bf16-custom](https://huggingface.co/anilbhatt1/phi2-oasst-guanaco-bf16-custom) from [anilbhatt1](https://huggingface.co/anilbhatt1)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi2-oasst-guanaco-bf16-custom.fp16.gguf](https://huggingface.co/afrideva/phi2-oasst-guanaco-bf16-custom-GGUF/resolve/main/phi2-oasst-guanaco-bf16-custom.fp16.gguf) | fp16 | 5.56 GB |
| [phi2-oasst-guanaco-bf16-custom.q2_k.gguf](https://huggingface.co/afrideva/phi2-oasst-guanaco-bf16-custom-GGUF/resolve/main/phi2-oasst-guanaco-bf16-custom.q2_k.gguf) | q2_k | 1.17 GB |
| [phi2-oasst-guanaco-bf16-custom.q3_k_m.gguf](https://huggingface.co/afrideva/phi2-oasst-guanaco-bf16-custom-GGUF/resolve/main/phi2-oasst-guanaco-bf16-custom.q3_k_m.gguf) | q3_k_m | 1.48 GB |
| [phi2-oasst-guanaco-bf16-custom.q4_k_m.gguf](https://huggingface.co/afrideva/phi2-oasst-guanaco-bf16-custom-GGUF/resolve/main/phi2-oasst-guanaco-bf16-custom.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [phi2-oasst-guanaco-bf16-custom.q5_k_m.gguf](https://huggingface.co/afrideva/phi2-oasst-guanaco-bf16-custom-GGUF/resolve/main/phi2-oasst-guanaco-bf16-custom.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [phi2-oasst-guanaco-bf16-custom.q6_k.gguf](https://huggingface.co/afrideva/phi2-oasst-guanaco-bf16-custom-GGUF/resolve/main/phi2-oasst-guanaco-bf16-custom.q6_k.gguf) | q6_k | 2.29 GB |
| [phi2-oasst-guanaco-bf16-custom.q8_0.gguf](https://huggingface.co/afrideva/phi2-oasst-guanaco-bf16-custom-GGUF/resolve/main/phi2-oasst-guanaco-bf16-custom.q8_0.gguf) | q8_0 | 2.96 GB |
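For reference (not part of the original card), one way to fetch a quant and run it locally with `llama-cpp-python`; the chosen file and the Guanaco-style prompt format are assumptions:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="afrideva/phi2-oasst-guanaco-bf16-custom-GGUF",
    filename="phi2-oasst-guanaco-bf16-custom.q4_k_m.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)

# Guanaco-style prompt; the exact template this finetune expects is an assumption.
out = llm("### Human: Give me three tips for staying focused.\n### Assistant:", max_tokens=256)
print(out["choices"][0]["text"])
```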
## Original Model Card:
# Finetuned microsoft-phi2 model
- microsoft-phi2 model finetuned on the "timdettmers/openassistant-guanaco" dataset with the QLoRA technique
- Will run on a Colab T4 GPU |
city96/AnimeClassifiers | city96 | 2023-12-21T00:57:40Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-11-29T14:59:53Z | ---
license: apache-2.0
---
# Anime Classifiers
[Training/inference code](https://github.com/city96/CityClassifiers) | [Live Demo](https://huggingface.co/spaces/city96/AnimeClassifiers-demo)
These are models that predict whether a concept is present in an image. The performance on high resolution images isn't very good, especially when detecting subtle image effects such as noise. This is due to CLIP using a fairly low resolution (336x336/224x224).
To combat this, tiling is used at inference time. The input image is first downscaled to 1536 (shortest edge - See `TF.functional.resize`), then 5 separate 512x512 areas are selected (4 corners + center - See `TF.functional.five_crop`). This helps as the downscale factor isn't nearly as drastic as passing the entire image to CLIP. As a bonus, it also avoids the issues with odd aspect ratios requiring cropping or letterboxing to work.
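A rough sketch of that tiling step with `torchvision` (this is not the linked inference code; details such as interpolation are assumptions):
```python
import torchvision.transforms.functional as TF
from PIL import Image

def make_tiles(path: str):
    """Downscale so the shortest edge is 1536 px, then take 512x512 crops of the four corners + center."""
    img = Image.open(path).convert("RGB")
    img = TF.resize(img, 1536)       # resizes the shortest edge to 1536
    return TF.five_crop(img, 512)    # (top-left, top-right, bottom-left, bottom-right, center)
```
Each of the five tiles is then embedded with CLIP and scored separately; how the per-tile scores are combined (e.g. by averaging) is left to the inference code.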

As for the training, it will be detailed in the sections below for the individual classifiers. At first, specialized models will be trained to a relatively high accuracy, building up a high quality but specific dataset in the process.
Then, these models will be used to split/sort each other's datasets. The code will need to be updated to support one image being part of more than one class, but the final result should be a clean dataset where each target aspect acts as a "tag" rather than a class.
## Architecture
The base model itself is fairly simple. It takes embeddings from a CLIP model (in this case, `openai/clip-vit-large-patch14`) and expands them to 1024 dimensions. From there, a single block with residuals is followed by a few linear layers which converge down to the final output.
For the classifier models, the final output goes through `nn.Softmax`.
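Read literally, that description corresponds to a head roughly like the sketch below; the hidden sizes and activations are assumptions, and the real definition lives in the linked training code:
```python
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    """CLIP embedding -> 1024 dims, one residual block, then linear layers converging to class probabilities."""
    def __init__(self, clip_dim: int = 768, num_classes: int = 2):
        super().__init__()
        self.expand = nn.Sequential(nn.Linear(clip_dim, 1024), nn.ReLU())
        self.block = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        x = self.expand(emb)
        x = x + self.block(x)  # residual connection
        return torch.softmax(self.head(x), dim=-1)
```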
# Models
## Chromatic Aberration - Anime
### Design goals
The goal was to detect [chromatic aberration](https://en.wikipedia.org/wiki/Chromatic_aberration?useskin=vector) in images.
For some odd reason, this effect has become a popular post processing effect to apply to images and drawings. While attempting to train an ESRGAN model, I noticed an odd halo around images and quickly figured out that this effect was the cause. This classifier aims to work as a base filter to remove such images from the dataset.
### Issues
- Seems to get confused by excessive HSV noise
- Triggers even if the effect is only applied to the background
- Sometimes triggers on rough linework/sketches (i.e. multiple semi-transparent lines overlapping)
- Low accuracy on 3D/2.5D with possible false positives.
### Training
The training settings can be found in the `config/CCAnime-ChromaticAberration-v1.yaml` file (7e-6 LR, cosine scheduler, 100K steps).


Final dataset score distribution for v1.16:
```
3215 images in dataset.
0_reg - 395 ||||
0_reg_booru - 1805 ||||||||||||||||||||||
1_chroma - 515 ||||||
1_synthetic - 500 ||||||
Class ratios:
00 - 2200 |||||||||||||||||||||||||||
01 - 1015 ||||||||||||
```
Version history:
- v1.0 - Initial test model, dataset is fully synthetic (500 images). Effect added by shifting red/blue channel by a random amount using chaiNNer (a rough approximation is sketched after this list).
- v1.1 - Added 300 images tagged "chromatic_aberration" from gelbooru. Added first 1000 images from danbooru2021 as reg images
- v1.2 - Used the newly trained predictor to filter the existing datasets - found ~70 positives in the reg set and ~30 false positives in the target set.
- v1.3-v1.16 - Repeatedly ran predictor against various datasets, adding false positives/negatives back into the dataset, sometimes running against the training set to filter out misclassified images as the predictor got better. Added/removed images were manually checked (My eyes hurt).
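For illustration, a rough numpy approximation of the red/blue channel shift mentioned in v1.0 (the real synthetic data was produced with chaiNNer; the shift range and directions here are assumptions):
```python
import numpy as np
from PIL import Image

def add_chromatic_aberration(path: str, max_shift: int = 6) -> Image.Image:
    """Shift the red and blue channels horizontally by random pixel offsets to fake lateral chromatic aberration."""
    img = np.array(Image.open(path).convert("RGB"))
    shift_r, shift_b = np.random.randint(1, max_shift + 1, size=2)
    img[..., 0] = np.roll(img[..., 0], shift_r, axis=1)    # red channel to the right
    img[..., 2] = np.roll(img[..., 2], -shift_b, axis=1)   # blue channel to the left
    return Image.fromarray(img)
```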
## Image Compression - Anime
### Design goals
The goal was to detect [compression artifacts](https://en.wikipedia.org/wiki/Compression_artifact?useskin=vector) in images.
This seems like the next logical step in dataset filtering. The flagged images can either be cleaned up or tagged correctly so the resulting network won't inherit the image artifacts.
### Issues
- Low accuracy on 3D/2.5D with possible false positives.
### Training
The training settings can be found in the `config/CCAnime-Compression-v1.yaml` file (2.7e-6 LR, cosine scheduler, 40K steps).

The eval loss only uses a single image for each target class, hence the questionable nature of the graph.

Final dataset score distribution for v1.5:
```
22736 images in dataset.
0_fpl - 108
0_reg_aes - 142
0_reg_gel - 7445 |||||||||||||
1_aes_jpg - 103
1_fpl - 8
1_syn_gel - 7445 |||||||||||||
1_syn_jpg - 40
2_syn_gel - 7445 |||||||||||||
2_syn_webp - 0
Class ratios:
00 - 7695 |||||||||||||
01 - 7596 |||||||||||||
02 - 7445 |||||||||||||
```
Version history:
- v1.0 - Initial test model, dataset consists of 40 hand picked images and their jpeg compressed counterpart. Compression is done with ChaiNNer, compression rate is randomized.
- v1.1 - Added more images by re-filtering the input dataset using the v1 model, keeping only the top/bottom 10%.
- v1.2 - Used the newly trained predictor to filter the existing datasets - found ~70 positives in the reg set and ~30 false positives in the target set.
- v1.3 - Scraped ~7500 images from gelbooru, filtering for min. image size of at least 3000 and a file size larger than 8MB. Compressed using ChaiNNer as before.
- v1.4 - Added webp compression to the list, decided against adding GIF/dithering since it's rarely used nowadays.
- v1.5 - Changed LR/step count to better match larger dataset. Added false positives/negatives from v1.4.
|
lorenzreyes/q-FrozenLake-v1-4x4-noSlippery | lorenzreyes | 2023-12-21T00:55:44Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-21T00:55:42Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="lorenzreyes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/bofuri_misery_s2-lora-nochekaiser | LarryAIDraw | 2023-12-21T00:53:57Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:52:27Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/236687/misery-bofuri-i-dont-want-to-get-hurt-so-ill-max-out-my-defense |
LarryAIDraw/kochou_wang_3 | LarryAIDraw | 2023-12-21T00:51:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:47:32Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/237486/kochou-wang-boarding-school-juliet |
LarryAIDraw/inui_sajuna | LarryAIDraw | 2023-12-21T00:51:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:47:10Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/237276/inui-sajuna-my-dress-up-darling |
LarryAIDraw/miyazono_kawori | LarryAIDraw | 2023-12-21T00:51:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:46:48Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/237090/miyazono-kawori-your-lie-in-april |
LarryAIDraw/HitomiWashioR4 | LarryAIDraw | 2023-12-21T00:50:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:46:02Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/237596/hitomi-washio-akebi-chan-no-sailor-fuku |
LarryAIDraw/liselotte-sd15-6-000007 | LarryAIDraw | 2023-12-21T00:50:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:45:36Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/22379/liselotte-cretia-seirei-gensouki |
LarryAIDraw/Ilulu-10 | LarryAIDraw | 2023-12-21T00:50:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:45:11Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/238018/ilulu-anime-character-miss-kobayashis-dragon-maid-or |
LarryAIDraw/sphsrKafker-09 | LarryAIDraw | 2023-12-21T00:49:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:44:45Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/237969/kafka-from-honkai-star-rail-lora |
LarryAIDraw/Char-Honkai-Bronya-V2 | LarryAIDraw | 2023-12-21T00:42:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:35:31Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/130516/bronya-zaychik-silverwing-4in1 |
csNoHug/distilbert-base-uncased-finetuned-ner-cadec-no-iob | csNoHug | 2023-12-21T00:42:37Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-21T00:24:20Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner-cadec-no-iob
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner-cadec-no-iob
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
- Precision: 0.5826
- Recall: 0.6187
- F1: 0.6001
- Accuracy: 0.9303
- Adr Precision: 0.5109
- Adr Recall: 0.5773
- Adr F1: 0.5421
- Disease Precision: 0.4643
- Disease Recall: 0.4062
- Disease F1: 0.4333
- Drug Precision: 0.8743
- Drug Recall: 0.8889
- Drug F1: 0.8815
- Finding Precision: 0.2143
- Finding Recall: 0.1875
- Finding F1: 0.2000
- Symptom Precision: 0.5556
- Symptom Recall: 0.3448
- Symptom F1: 0.4255
- Macro Avg F1: 0.4965
- Weighted Avg F1: 0.5992
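A minimal inference sketch with the 🤗 `pipeline` API (not in the original card; the aggregation strategy and the example sentence are assumptions):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="csNoHug/distilbert-base-uncased-finetuned-ner-cadec-no-iob",
    aggregation_strategy="simple",  # group sub-word tokens into entity spans
)
print(ner("Lipitor gave me severe muscle pain and occasional nausea."))
```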
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | Macro Avg F1 | Weighted Avg F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:------------:|:---------------:|
| No log | 1.0 | 125 | 0.2402 | 0.4952 | 0.5462 | 0.5194 | 0.9140 | 0.3893 | 0.5258 | 0.4474 | 0.0 | 0.0 | 0.0 | 0.8883 | 0.8833 | 0.8858 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2666 | 0.4966 |
| No log | 2.0 | 250 | 0.2136 | 0.5380 | 0.5976 | 0.5663 | 0.9239 | 0.4412 | 0.5649 | 0.4955 | 0.6818 | 0.4688 | 0.5556 | 0.8503 | 0.8833 | 0.8665 | 0.2857 | 0.0625 | 0.1026 | 0.6 | 0.1034 | 0.1765 | 0.4393 | 0.5573 |
| No log | 3.0 | 375 | 0.2199 | 0.5283 | 0.5660 | 0.5465 | 0.9191 | 0.4492 | 0.5010 | 0.4737 | 0.5455 | 0.375 | 0.4444 | 0.8674 | 0.8722 | 0.8698 | 0.1515 | 0.1562 | 0.1538 | 0.3429 | 0.4138 | 0.375 | 0.4634 | 0.5492 |
| 0.2232 | 4.0 | 500 | 0.2292 | 0.5622 | 0.5726 | 0.5673 | 0.9228 | 0.4971 | 0.5278 | 0.512 | 0.0 | 0.0 | 0.0 | 0.8791 | 0.8889 | 0.8840 | 0.1852 | 0.3125 | 0.2326 | 0.4211 | 0.2759 | 0.3333 | 0.3924 | 0.5601 |
| 0.2232 | 5.0 | 625 | 0.2474 | 0.5863 | 0.6095 | 0.5977 | 0.9265 | 0.5055 | 0.5732 | 0.5372 | 0.6 | 0.375 | 0.4615 | 0.8785 | 0.8833 | 0.8809 | 0.2273 | 0.1562 | 0.1852 | 0.5333 | 0.2759 | 0.3636 | 0.4857 | 0.5941 |
| 0.2232 | 6.0 | 750 | 0.2474 | 0.5635 | 0.5910 | 0.5769 | 0.9244 | 0.4842 | 0.5381 | 0.5098 | 0.375 | 0.375 | 0.375 | 0.8840 | 0.8889 | 0.8864 | 0.16 | 0.125 | 0.1404 | 0.6111 | 0.3793 | 0.4681 | 0.4759 | 0.5763 |
| 0.2232 | 7.0 | 875 | 0.2709 | 0.5758 | 0.5963 | 0.5859 | 0.9275 | 0.4991 | 0.5423 | 0.5198 | 0.5 | 0.2812 | 0.36 | 0.8710 | 0.9 | 0.8852 | 0.2683 | 0.3438 | 0.3014 | 0.5385 | 0.2414 | 0.3333 | 0.4799 | 0.5835 |
| 0.0707 | 8.0 | 1000 | 0.2611 | 0.5752 | 0.6003 | 0.5875 | 0.9282 | 0.4991 | 0.5485 | 0.5226 | 0.6923 | 0.2812 | 0.4 | 0.8689 | 0.8833 | 0.8760 | 0.2895 | 0.3438 | 0.3143 | 0.4167 | 0.3448 | 0.3774 | 0.4981 | 0.5870 |
| 0.0707 | 9.0 | 1125 | 0.2664 | 0.5749 | 0.6227 | 0.5978 | 0.9297 | 0.5111 | 0.5711 | 0.5394 | 0.4375 | 0.4375 | 0.4375 | 0.8710 | 0.9 | 0.8852 | 0.2093 | 0.2812 | 0.24 | 0.5556 | 0.3448 | 0.4255 | 0.5055 | 0.6003 |
| 0.0707 | 10.0 | 1250 | 0.3066 | 0.5537 | 0.5778 | 0.5655 | 0.9268 | 0.4761 | 0.5134 | 0.4940 | 0.4545 | 0.3125 | 0.3704 | 0.8610 | 0.8944 | 0.8774 | 0.25 | 0.2812 | 0.2647 | 0.3913 | 0.3103 | 0.3462 | 0.4705 | 0.5645 |
| 0.0707 | 11.0 | 1375 | 0.2980 | 0.5751 | 0.5910 | 0.5830 | 0.9282 | 0.4971 | 0.5340 | 0.5149 | 0.4615 | 0.375 | 0.4138 | 0.8602 | 0.8889 | 0.8743 | 0.2917 | 0.2188 | 0.25 | 0.4545 | 0.3448 | 0.3922 | 0.4890 | 0.5801 |
| 0.0293 | 12.0 | 1500 | 0.3272 | 0.5932 | 0.6174 | 0.6050 | 0.9303 | 0.5082 | 0.5732 | 0.5388 | 0.6316 | 0.375 | 0.4706 | 0.8901 | 0.9 | 0.8950 | 0.2609 | 0.1875 | 0.2182 | 0.5556 | 0.3448 | 0.4255 | 0.5096 | 0.6026 |
| 0.0293 | 13.0 | 1625 | 0.3161 | 0.5664 | 0.6187 | 0.5914 | 0.9288 | 0.4937 | 0.5691 | 0.5287 | 0.3846 | 0.4688 | 0.4225 | 0.8804 | 0.9 | 0.8901 | 0.2308 | 0.1875 | 0.2069 | 0.5 | 0.3448 | 0.4082 | 0.4913 | 0.5919 |
| 0.0293 | 14.0 | 1750 | 0.3529 | 0.5736 | 0.6016 | 0.5873 | 0.9269 | 0.4806 | 0.5361 | 0.5068 | 0.5652 | 0.4062 | 0.4727 | 0.8913 | 0.9111 | 0.9011 | 0.3077 | 0.25 | 0.2759 | 0.5238 | 0.3793 | 0.44 | 0.5193 | 0.5867 |
| 0.0293 | 15.0 | 1875 | 0.3381 | 0.5608 | 0.6082 | 0.5835 | 0.9290 | 0.5074 | 0.5649 | 0.5346 | 0.2857 | 0.1875 | 0.2264 | 0.8757 | 0.9 | 0.8877 | 0.1731 | 0.2812 | 0.2143 | 0.4167 | 0.3448 | 0.3774 | 0.4481 | 0.5859 |
| 0.0133 | 16.0 | 2000 | 0.3275 | 0.5833 | 0.6187 | 0.6005 | 0.9307 | 0.5064 | 0.5711 | 0.5368 | 0.4286 | 0.375 | 0.4000 | 0.8852 | 0.9 | 0.8926 | 0.2759 | 0.25 | 0.2623 | 0.5882 | 0.3448 | 0.4348 | 0.5053 | 0.6000 |
| 0.0133 | 17.0 | 2125 | 0.3623 | 0.5787 | 0.6161 | 0.5968 | 0.9310 | 0.4928 | 0.5649 | 0.5264 | 0.6 | 0.4688 | 0.5263 | 0.8852 | 0.9 | 0.8926 | 0.24 | 0.1875 | 0.2105 | 0.5556 | 0.3448 | 0.4255 | 0.5163 | 0.5962 |
| 0.0133 | 18.0 | 2250 | 0.3466 | 0.5699 | 0.6187 | 0.5933 | 0.9299 | 0.4937 | 0.5691 | 0.5287 | 0.3889 | 0.4375 | 0.4118 | 0.8901 | 0.9 | 0.8950 | 0.25 | 0.2188 | 0.2333 | 0.5556 | 0.3448 | 0.4255 | 0.4989 | 0.5944 |
| 0.0133 | 19.0 | 2375 | 0.3496 | 0.5751 | 0.6214 | 0.5973 | 0.9321 | 0.5101 | 0.5753 | 0.5407 | 0.4 | 0.375 | 0.3871 | 0.8798 | 0.8944 | 0.8871 | 0.1860 | 0.25 | 0.2133 | 0.6875 | 0.3793 | 0.4889 | 0.5034 | 0.6007 |
| 0.0075 | 20.0 | 2500 | 0.3676 | 0.5898 | 0.6280 | 0.6083 | 0.9314 | 0.5090 | 0.5814 | 0.5428 | 0.5185 | 0.4375 | 0.4746 | 0.8852 | 0.9 | 0.8926 | 0.2692 | 0.2188 | 0.2414 | 0.6471 | 0.3793 | 0.4783 | 0.5259 | 0.6078 |
| 0.0075 | 21.0 | 2625 | 0.3658 | 0.5816 | 0.6253 | 0.6027 | 0.9306 | 0.4991 | 0.5753 | 0.5345 | 0.5185 | 0.4375 | 0.4746 | 0.8811 | 0.9056 | 0.8932 | 0.2593 | 0.2188 | 0.2373 | 0.6471 | 0.3793 | 0.4783 | 0.5236 | 0.6024 |
| 0.0075 | 22.0 | 2750 | 0.3803 | 0.5804 | 0.6187 | 0.5990 | 0.9294 | 0.5148 | 0.5753 | 0.5433 | 0.3846 | 0.3125 | 0.3448 | 0.8859 | 0.9056 | 0.8956 | 0.2059 | 0.2188 | 0.2121 | 0.4545 | 0.3448 | 0.3922 | 0.4776 | 0.5988 |
| 0.0075 | 23.0 | 2875 | 0.3795 | 0.5954 | 0.6174 | 0.6062 | 0.9305 | 0.5139 | 0.5711 | 0.5410 | 0.5652 | 0.4062 | 0.4727 | 0.8852 | 0.9 | 0.8926 | 0.2609 | 0.1875 | 0.2182 | 0.5556 | 0.3448 | 0.4255 | 0.5100 | 0.6036 |
| 0.0051 | 24.0 | 3000 | 0.3849 | 0.5774 | 0.6148 | 0.5955 | 0.9295 | 0.5093 | 0.5670 | 0.5366 | 0.4444 | 0.375 | 0.4068 | 0.8798 | 0.8944 | 0.8871 | 0.2121 | 0.2188 | 0.2154 | 0.4583 | 0.3793 | 0.4151 | 0.4922 | 0.5961 |
| 0.0051 | 25.0 | 3125 | 0.3847 | 0.5911 | 0.6293 | 0.6096 | 0.9303 | 0.5247 | 0.5918 | 0.5562 | 0.4828 | 0.4375 | 0.4590 | 0.875 | 0.8944 | 0.8846 | 0.1724 | 0.1562 | 0.1639 | 0.5556 | 0.3448 | 0.4255 | 0.4979 | 0.6085 |
| 0.0051 | 26.0 | 3250 | 0.3917 | 0.5901 | 0.6266 | 0.6078 | 0.9298 | 0.5165 | 0.5794 | 0.5462 | 0.4667 | 0.4375 | 0.4516 | 0.8804 | 0.9 | 0.8901 | 0.2759 | 0.25 | 0.2623 | 0.5556 | 0.3448 | 0.4255 | 0.5151 | 0.6072 |
| 0.0051 | 27.0 | 3375 | 0.3915 | 0.5901 | 0.6306 | 0.6097 | 0.9306 | 0.5182 | 0.5876 | 0.5507 | 0.4828 | 0.4375 | 0.4590 | 0.8852 | 0.9 | 0.8926 | 0.2414 | 0.2188 | 0.2295 | 0.5263 | 0.3448 | 0.4167 | 0.5097 | 0.6093 |
| 0.0034 | 28.0 | 3500 | 0.4010 | 0.5881 | 0.6253 | 0.6061 | 0.9305 | 0.5240 | 0.5856 | 0.5531 | 0.4167 | 0.3125 | 0.3571 | 0.8757 | 0.9 | 0.8877 | 0.2162 | 0.25 | 0.2319 | 0.5556 | 0.3448 | 0.4255 | 0.4911 | 0.6058 |
| 0.0034 | 29.0 | 3625 | 0.4136 | 0.5955 | 0.6293 | 0.6119 | 0.9313 | 0.5212 | 0.5835 | 0.5506 | 0.4828 | 0.4375 | 0.4590 | 0.8859 | 0.9056 | 0.8956 | 0.2692 | 0.2188 | 0.2414 | 0.5263 | 0.3448 | 0.4167 | 0.5127 | 0.6105 |
| 0.0034 | 30.0 | 3750 | 0.4072 | 0.5918 | 0.6293 | 0.6100 | 0.9312 | 0.5191 | 0.5876 | 0.5513 | 0.4615 | 0.375 | 0.4138 | 0.8804 | 0.9 | 0.8901 | 0.2581 | 0.25 | 0.2540 | 0.625 | 0.3448 | 0.4444 | 0.5107 | 0.6093 |
| 0.0034 | 31.0 | 3875 | 0.4081 | 0.5995 | 0.6240 | 0.6115 | 0.9307 | 0.5294 | 0.5753 | 0.5514 | 0.4375 | 0.4375 | 0.4375 | 0.8804 | 0.9 | 0.8901 | 0.32 | 0.25 | 0.2807 | 0.4762 | 0.3448 | 0.4000 | 0.5119 | 0.6098 |
| 0.0025 | 32.0 | 4000 | 0.4022 | 0.5885 | 0.6319 | 0.6094 | 0.9312 | 0.5152 | 0.5938 | 0.5517 | 0.5185 | 0.4375 | 0.4746 | 0.875 | 0.8944 | 0.8846 | 0.25 | 0.1875 | 0.2143 | 0.5 | 0.3448 | 0.4082 | 0.5067 | 0.6078 |
| 0.0025 | 33.0 | 4125 | 0.4066 | 0.5821 | 0.6266 | 0.6036 | 0.9312 | 0.5108 | 0.5876 | 0.5465 | 0.4643 | 0.4062 | 0.4333 | 0.8743 | 0.8889 | 0.8815 | 0.2414 | 0.2188 | 0.2295 | 0.5556 | 0.3448 | 0.4255 | 0.5033 | 0.6033 |
| 0.0025 | 34.0 | 4250 | 0.4049 | 0.5865 | 0.6306 | 0.6078 | 0.9318 | 0.5198 | 0.5959 | 0.5552 | 0.4815 | 0.4062 | 0.4407 | 0.8696 | 0.8889 | 0.8791 | 0.2 | 0.1875 | 0.1935 | 0.5556 | 0.3448 | 0.4255 | 0.4988 | 0.6071 |
| 0.0025 | 35.0 | 4375 | 0.4129 | 0.5741 | 0.6187 | 0.5956 | 0.9294 | 0.5009 | 0.5773 | 0.5364 | 0.5 | 0.4375 | 0.4667 | 0.8689 | 0.8833 | 0.8760 | 0.2069 | 0.1875 | 0.1967 | 0.5556 | 0.3448 | 0.4255 | 0.5003 | 0.5955 |
| 0.002 | 36.0 | 4500 | 0.4134 | 0.5843 | 0.6266 | 0.6047 | 0.9303 | 0.5117 | 0.5876 | 0.5470 | 0.5 | 0.4375 | 0.4667 | 0.8743 | 0.8889 | 0.8815 | 0.2222 | 0.1875 | 0.2034 | 0.5556 | 0.3448 | 0.4255 | 0.5048 | 0.6039 |
| 0.002 | 37.0 | 4625 | 0.4138 | 0.5828 | 0.6266 | 0.6039 | 0.9303 | 0.5099 | 0.5856 | 0.5451 | 0.4815 | 0.4062 | 0.4407 | 0.875 | 0.8944 | 0.8846 | 0.2414 | 0.2188 | 0.2295 | 0.5556 | 0.3448 | 0.4255 | 0.5051 | 0.6034 |
| 0.002 | 38.0 | 4750 | 0.4126 | 0.5804 | 0.6187 | 0.5990 | 0.9297 | 0.5100 | 0.5794 | 0.5425 | 0.4444 | 0.375 | 0.4068 | 0.8743 | 0.8889 | 0.8815 | 0.2069 | 0.1875 | 0.1967 | 0.5556 | 0.3448 | 0.4255 | 0.4906 | 0.5982 |
| 0.002 | 39.0 | 4875 | 0.4139 | 0.5797 | 0.6187 | 0.5986 | 0.9301 | 0.5118 | 0.5794 | 0.5435 | 0.4286 | 0.375 | 0.4000 | 0.8743 | 0.8889 | 0.8815 | 0.1935 | 0.1875 | 0.1905 | 0.5556 | 0.3448 | 0.4255 | 0.4882 | 0.5983 |
| 0.0017 | 40.0 | 5000 | 0.4152 | 0.5826 | 0.6187 | 0.6001 | 0.9303 | 0.5109 | 0.5773 | 0.5421 | 0.4643 | 0.4062 | 0.4333 | 0.8743 | 0.8889 | 0.8815 | 0.2143 | 0.1875 | 0.2000 | 0.5556 | 0.3448 | 0.4255 | 0.4965 | 0.5992 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LarryAIDraw/Alisa_Mikhailovna_Kujou-Version1 | LarryAIDraw | 2023-12-21T00:42:32Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:35:08Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/238810/alisa-mikhailovna-kujou-alya-sometimes-hides-her-feelings-in-russian-unreleased-anime |
LarryAIDraw/kancolle_kashima-10 | LarryAIDraw | 2023-12-21T00:42:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:34:19Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/222958/kashima-or-kantai-collection |
LarryAIDraw/arknights_skadi | LarryAIDraw | 2023-12-21T00:42:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-21T00:33:56Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/233823?modelVersionId=263680 |
c-wang/deep-rl-course-unit1 | c-wang | 2023-12-21T00:39:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-21T00:38:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.25 +/- 16.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO model.
checkpoint = load_from_hub(repo_id="c-wang/deep-rl-course-unit1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
terrongraham/MultiLabelRecommendationModel | terrongraham | 2023-12-21T00:35:54Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T23:54:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MultiLabelRecommendationModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiLabelRecommendationModel
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.15.0
- Tokenizers 0.13.2
|
raoel/bert-finetuned-ner | raoel | 2023-12-21T00:23:38Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-21T00:23:14Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0292
- Validation Loss: 0.0568
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1850 | 0.0717 | 0 |
| 0.0497 | 0.0588 | 1 |
| 0.0292 | 0.0568 | 2 |
### Framework versions
- Transformers 4.36.0
- TensorFlow 2.13.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
csNoHug/albert-base-v2-finetuned-ner-cadec-no-iob | csNoHug | 2023-12-21T00:22:11Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"token-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-20T23:56:38Z | ---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-base-v2-finetuned-ner-cadec-no-iob
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner-cadec-no-iob
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5037
- Precision: 0.5849
- Recall: 0.6227
- F1: 0.6032
- Accuracy: 0.9311
- Adr Precision: 0.5065
- Adr Recall: 0.5608
- Adr F1: 0.5323
- Disease Precision: 0.52
- Disease Recall: 0.4062
- Disease F1: 0.4561
- Drug Precision: 0.9121
- Drug Recall: 0.9222
- Drug F1: 0.9171
- Finding Precision: 0.1875
- Finding Recall: 0.1875
- Finding F1: 0.1875
- Symptom Precision: 0.4839
- Symptom Recall: 0.5172
- Symptom F1: 0.5000
- Macro Avg F1: 0.5186
- Weighted Avg F1: 0.6047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | Macro Avg F1 | Weighted Avg F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:------------:|:---------------:|
| No log | 1.0 | 125 | 0.2244 | 0.5211 | 0.6029 | 0.5590 | 0.9215 | 0.4547 | 0.6103 | 0.5211 | 0.3864 | 0.5312 | 0.4474 | 0.8276 | 0.8 | 0.8136 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3564 | 0.5455 |
| No log | 2.0 | 250 | 0.2082 | 0.5448 | 0.5937 | 0.5682 | 0.9240 | 0.4700 | 0.5649 | 0.5131 | 0.4348 | 0.3125 | 0.3636 | 0.8722 | 0.8722 | 0.8722 | 0.1892 | 0.2188 | 0.2029 | 0.6667 | 0.0690 | 0.125 | 0.4154 | 0.5641 |
| No log | 3.0 | 375 | 0.2113 | 0.5416 | 0.6016 | 0.5700 | 0.9273 | 0.4863 | 0.5505 | 0.5164 | 0.48 | 0.375 | 0.4211 | 0.8182 | 0.85 | 0.8338 | 0.1622 | 0.1875 | 0.1739 | 0.4091 | 0.6207 | 0.4932 | 0.4877 | 0.5724 |
| 0.187 | 4.0 | 500 | 0.2257 | 0.5418 | 0.6069 | 0.5725 | 0.9281 | 0.4739 | 0.5423 | 0.5058 | 0.3548 | 0.3438 | 0.3492 | 0.8944 | 0.8944 | 0.8944 | 0.1607 | 0.2812 | 0.2045 | 0.5926 | 0.5517 | 0.5714 | 0.5051 | 0.5813 |
| 0.187 | 5.0 | 625 | 0.2483 | 0.5788 | 0.6253 | 0.6011 | 0.9283 | 0.4957 | 0.5918 | 0.5395 | 0.6111 | 0.3438 | 0.4400 | 0.8883 | 0.8833 | 0.8858 | 0.2083 | 0.1562 | 0.1786 | 0.6316 | 0.4138 | 0.5 | 0.5088 | 0.6008 |
| 0.187 | 6.0 | 750 | 0.2584 | 0.5572 | 0.6042 | 0.5797 | 0.9242 | 0.4843 | 0.5423 | 0.5117 | 0.4 | 0.375 | 0.3871 | 0.8989 | 0.8889 | 0.8939 | 0.1951 | 0.25 | 0.2192 | 0.5 | 0.5172 | 0.5085 | 0.5041 | 0.5847 |
| 0.187 | 7.0 | 875 | 0.2676 | 0.5640 | 0.5989 | 0.5809 | 0.9261 | 0.4836 | 0.5464 | 0.5131 | 1.0 | 0.0938 | 0.1714 | 0.9096 | 0.8944 | 0.9020 | 0.1964 | 0.3438 | 0.25 | 0.6667 | 0.4828 | 0.56 | 0.4793 | 0.5817 |
| 0.0608 | 8.0 | 1000 | 0.2623 | 0.5797 | 0.6187 | 0.5986 | 0.9335 | 0.5121 | 0.5670 | 0.5382 | 0.5556 | 0.3125 | 0.4000 | 0.8944 | 0.8944 | 0.8944 | 0.1562 | 0.1562 | 0.1562 | 0.4286 | 0.6207 | 0.5070 | 0.4992 | 0.5996 |
| 0.0608 | 9.0 | 1125 | 0.2968 | 0.5754 | 0.6293 | 0.6011 | 0.9314 | 0.5162 | 0.5897 | 0.5505 | 0.4062 | 0.4062 | 0.4062 | 0.8840 | 0.8889 | 0.8864 | 0.1282 | 0.1562 | 0.1408 | 0.5652 | 0.4483 | 0.5000 | 0.4968 | 0.6050 |
| 0.0608 | 10.0 | 1250 | 0.3169 | 0.5485 | 0.5897 | 0.5683 | 0.9289 | 0.4887 | 0.5361 | 0.5113 | 0.3333 | 0.3125 | 0.3226 | 0.8820 | 0.8722 | 0.8771 | 0.1389 | 0.1562 | 0.1471 | 0.3846 | 0.5172 | 0.4412 | 0.4598 | 0.5721 |
| 0.0608 | 11.0 | 1375 | 0.3367 | 0.5673 | 0.6227 | 0.5937 | 0.9261 | 0.5081 | 0.5794 | 0.5414 | 0.5625 | 0.2812 | 0.375 | 0.8798 | 0.8944 | 0.8871 | 0.175 | 0.2188 | 0.1944 | 0.35 | 0.4828 | 0.4058 | 0.4807 | 0.5966 |
| 0.0214 | 12.0 | 1500 | 0.3600 | 0.5917 | 0.6425 | 0.6161 | 0.9314 | 0.5325 | 0.5918 | 0.5605 | 0.4516 | 0.4375 | 0.4444 | 0.8684 | 0.9167 | 0.8919 | 0.2258 | 0.2188 | 0.2222 | 0.4375 | 0.4828 | 0.4590 | 0.5156 | 0.6162 |
| 0.0214 | 13.0 | 1625 | 0.3514 | 0.5606 | 0.6161 | 0.5871 | 0.9279 | 0.4882 | 0.5546 | 0.5193 | 0.4412 | 0.4688 | 0.4545 | 0.8967 | 0.9167 | 0.9066 | 0.1351 | 0.1562 | 0.1449 | 0.4815 | 0.4483 | 0.4643 | 0.4979 | 0.5906 |
| 0.0214 | 14.0 | 1750 | 0.3994 | 0.5654 | 0.5871 | 0.5761 | 0.9270 | 0.5090 | 0.5258 | 0.5172 | 0.3333 | 0.2812 | 0.3051 | 0.9034 | 0.8833 | 0.8933 | 0.125 | 0.1875 | 0.15 | 0.4571 | 0.5517 | 0.5 | 0.4731 | 0.5814 |
| 0.0214 | 15.0 | 1875 | 0.4133 | 0.5858 | 0.5989 | 0.5923 | 0.9292 | 0.5276 | 0.5526 | 0.5398 | 0.4737 | 0.2812 | 0.3529 | 0.8807 | 0.8611 | 0.8708 | 0.1538 | 0.1875 | 0.1690 | 0.4848 | 0.5517 | 0.5161 | 0.4897 | 0.5939 |
| 0.0089 | 16.0 | 2000 | 0.4126 | 0.5695 | 0.6108 | 0.5894 | 0.9301 | 0.4935 | 0.5505 | 0.5205 | 0.5 | 0.375 | 0.4286 | 0.9056 | 0.9056 | 0.9056 | 0.1951 | 0.25 | 0.2192 | 0.4815 | 0.4483 | 0.4643 | 0.5076 | 0.5932 |
| 0.0089 | 17.0 | 2125 | 0.4195 | 0.5856 | 0.6095 | 0.5973 | 0.9288 | 0.5057 | 0.5505 | 0.5271 | 0.6923 | 0.2812 | 0.4 | 0.9157 | 0.9056 | 0.9106 | 0.1765 | 0.1875 | 0.1818 | 0.4722 | 0.5862 | 0.5231 | 0.5085 | 0.5981 |
| 0.0089 | 18.0 | 2250 | 0.4177 | 0.5856 | 0.6227 | 0.6036 | 0.9300 | 0.5036 | 0.5711 | 0.5353 | 0.5 | 0.375 | 0.4286 | 0.9171 | 0.9222 | 0.9197 | 0.1667 | 0.1562 | 0.1613 | 0.5714 | 0.4138 | 0.4800 | 0.5050 | 0.6041 |
| 0.0089 | 19.0 | 2375 | 0.4675 | 0.5623 | 0.5897 | 0.5757 | 0.9257 | 0.5038 | 0.5402 | 0.5214 | 0.4118 | 0.2188 | 0.2857 | 0.9023 | 0.8722 | 0.8870 | 0.0943 | 0.1562 | 0.1176 | 0.5161 | 0.5517 | 0.5333 | 0.4690 | 0.5817 |
| 0.004 | 20.0 | 2500 | 0.4435 | 0.5604 | 0.6055 | 0.5821 | 0.9276 | 0.4878 | 0.5340 | 0.5098 | 0.4643 | 0.4062 | 0.4333 | 0.9066 | 0.9167 | 0.9116 | 0.15 | 0.1875 | 0.1667 | 0.4211 | 0.5517 | 0.4776 | 0.4998 | 0.5863 |
| 0.004 | 21.0 | 2625 | 0.4669 | 0.5516 | 0.5989 | 0.5743 | 0.9277 | 0.4822 | 0.5299 | 0.5049 | 0.4828 | 0.4375 | 0.4590 | 0.8962 | 0.9111 | 0.9036 | 0.1463 | 0.1875 | 0.1644 | 0.3514 | 0.4483 | 0.3939 | 0.4852 | 0.5790 |
| 0.004 | 22.0 | 2750 | 0.4732 | 0.5820 | 0.6042 | 0.5929 | 0.9285 | 0.5058 | 0.5381 | 0.5215 | 0.4643 | 0.4062 | 0.4333 | 0.9153 | 0.9 | 0.9076 | 0.2105 | 0.25 | 0.2286 | 0.5 | 0.4828 | 0.4912 | 0.5164 | 0.5959 |
| 0.004 | 23.0 | 2875 | 0.4922 | 0.5816 | 0.6016 | 0.5914 | 0.9258 | 0.5048 | 0.5402 | 0.5219 | 0.5 | 0.3438 | 0.4074 | 0.9091 | 0.8889 | 0.8989 | 0.2162 | 0.25 | 0.2319 | 0.5 | 0.5172 | 0.5085 | 0.5137 | 0.5938 |
| 0.0016 | 24.0 | 3000 | 0.4747 | 0.5789 | 0.6148 | 0.5963 | 0.9294 | 0.5038 | 0.5526 | 0.5270 | 0.4667 | 0.4375 | 0.4516 | 0.9056 | 0.9056 | 0.9056 | 0.2 | 0.1875 | 0.1935 | 0.4545 | 0.5172 | 0.4839 | 0.5123 | 0.5980 |
| 0.0016 | 25.0 | 3125 | 0.4849 | 0.5851 | 0.6121 | 0.5983 | 0.9300 | 0.5085 | 0.5526 | 0.5296 | 0.4783 | 0.3438 | 0.4 | 0.9011 | 0.9111 | 0.9061 | 0.2069 | 0.1875 | 0.1967 | 0.4688 | 0.5172 | 0.4918 | 0.5048 | 0.5981 |
| 0.0016 | 26.0 | 3250 | 0.4692 | 0.5821 | 0.6266 | 0.6036 | 0.9307 | 0.5009 | 0.5629 | 0.5301 | 0.48 | 0.375 | 0.4211 | 0.9176 | 0.9278 | 0.9227 | 0.2424 | 0.25 | 0.2462 | 0.4839 | 0.5172 | 0.5000 | 0.5240 | 0.6056 |
| 0.0016 | 27.0 | 3375 | 0.4785 | 0.5752 | 0.6108 | 0.5925 | 0.9299 | 0.5 | 0.5443 | 0.5212 | 0.4615 | 0.375 | 0.4138 | 0.9011 | 0.9111 | 0.9061 | 0.2105 | 0.25 | 0.2286 | 0.4839 | 0.5172 | 0.5000 | 0.5139 | 0.5949 |
| 0.001 | 28.0 | 3500 | 0.4873 | 0.5810 | 0.6201 | 0.5999 | 0.9322 | 0.5103 | 0.5629 | 0.5353 | 0.4815 | 0.4062 | 0.4407 | 0.8962 | 0.9111 | 0.9036 | 0.1613 | 0.1562 | 0.1587 | 0.4545 | 0.5172 | 0.4839 | 0.5044 | 0.6009 |
| 0.001 | 29.0 | 3625 | 0.4825 | 0.5813 | 0.6227 | 0.6013 | 0.9318 | 0.5028 | 0.5629 | 0.5311 | 0.52 | 0.4062 | 0.4561 | 0.8962 | 0.9111 | 0.9036 | 0.2333 | 0.2188 | 0.2258 | 0.4839 | 0.5172 | 0.5000 | 0.5233 | 0.6023 |
| 0.001 | 30.0 | 3750 | 0.4883 | 0.5769 | 0.6135 | 0.5946 | 0.9307 | 0.4944 | 0.5505 | 0.5210 | 0.52 | 0.4062 | 0.4561 | 0.9111 | 0.9111 | 0.9111 | 0.2069 | 0.1875 | 0.1967 | 0.4688 | 0.5172 | 0.4918 | 0.5154 | 0.5961 |
| 0.001 | 31.0 | 3875 | 0.4964 | 0.5734 | 0.6135 | 0.5927 | 0.9308 | 0.4963 | 0.5526 | 0.5229 | 0.5 | 0.4062 | 0.4483 | 0.9011 | 0.9111 | 0.9061 | 0.1613 | 0.1562 | 0.1587 | 0.4688 | 0.5172 | 0.4918 | 0.5056 | 0.5942 |
| 0.0005 | 32.0 | 4000 | 0.4977 | 0.5817 | 0.6201 | 0.6003 | 0.9309 | 0.5047 | 0.5588 | 0.5303 | 0.52 | 0.4062 | 0.4561 | 0.9066 | 0.9167 | 0.9116 | 0.1875 | 0.1875 | 0.1875 | 0.4688 | 0.5172 | 0.4918 | 0.5155 | 0.6018 |
| 0.0005 | 33.0 | 4125 | 0.5008 | 0.5810 | 0.6201 | 0.5999 | 0.9312 | 0.5047 | 0.5567 | 0.5294 | 0.5 | 0.4062 | 0.4483 | 0.9121 | 0.9222 | 0.9171 | 0.1765 | 0.1875 | 0.1818 | 0.4688 | 0.5172 | 0.4918 | 0.5137 | 0.6019 |
| 0.0005 | 34.0 | 4250 | 0.5028 | 0.5829 | 0.6214 | 0.6015 | 0.9310 | 0.5047 | 0.5588 | 0.5303 | 0.52 | 0.4062 | 0.4561 | 0.9121 | 0.9222 | 0.9171 | 0.1875 | 0.1875 | 0.1875 | 0.4688 | 0.5172 | 0.4918 | 0.5166 | 0.6031 |
| 0.0005 | 35.0 | 4375 | 0.5037 | 0.5849 | 0.6227 | 0.6032 | 0.9311 | 0.5065 | 0.5608 | 0.5323 | 0.52 | 0.4062 | 0.4561 | 0.9121 | 0.9222 | 0.9171 | 0.1875 | 0.1875 | 0.1875 | 0.4839 | 0.5172 | 0.5000 | 0.5186 | 0.6047 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF | afrideva | 2023-12-21T00:17:28Z | 35 | 4 | null | [
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"base_model:Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1",
"base_model:quantized:Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1",
"license:other",
"region:us"
] | text-generation | 2023-12-21T00:09:21Z | ---
base_model: Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1
inference: false
license: other
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
license_name: microsoft-research-license
model_creator: Yhyu13
model_name: phi-2-sft-alpaca_gpt4_en-ep1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1-GGUF
Quantized GGUF model files for [phi-2-sft-alpaca_gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1) from [Yhyu13](https://huggingface.co/Yhyu13)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-2-sft-alpaca_gpt4_en-ep1.fp16.gguf](https://huggingface.co/afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF/resolve/main/phi-2-sft-alpaca_gpt4_en-ep1.fp16.gguf) | fp16 | 5.56 GB |
| [phi-2-sft-alpaca_gpt4_en-ep1.q2_k.gguf](https://huggingface.co/afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF/resolve/main/phi-2-sft-alpaca_gpt4_en-ep1.q2_k.gguf) | q2_k | 1.17 GB |
| [phi-2-sft-alpaca_gpt4_en-ep1.q3_k_m.gguf](https://huggingface.co/afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF/resolve/main/phi-2-sft-alpaca_gpt4_en-ep1.q3_k_m.gguf) | q3_k_m | 1.48 GB |
| [phi-2-sft-alpaca_gpt4_en-ep1.q4_k_m.gguf](https://huggingface.co/afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF/resolve/main/phi-2-sft-alpaca_gpt4_en-ep1.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [phi-2-sft-alpaca_gpt4_en-ep1.q5_k_m.gguf](https://huggingface.co/afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF/resolve/main/phi-2-sft-alpaca_gpt4_en-ep1.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [phi-2-sft-alpaca_gpt4_en-ep1.q6_k.gguf](https://huggingface.co/afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF/resolve/main/phi-2-sft-alpaca_gpt4_en-ep1.q6_k.gguf) | q6_k | 2.29 GB |
| [phi-2-sft-alpaca_gpt4_en-ep1.q8_0.gguf](https://huggingface.co/afrideva/phi-2-sft-alpaca_gpt4_en-ep1-GGUF/resolve/main/phi-2-sft-alpaca_gpt4_en-ep1.q8_0.gguf) | q8_0 | 2.96 GB |
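For a quick local test, one of the quantized files above can be loaded with `llama-cpp-python` (or the llama.cpp CLI). The snippet below is only a sketch: the local file path and the prompt are placeholders, and running phi-2 GGUF files assumes a sufficiently recent llama.cpp build.

```python
# Sketch: load a quantized GGUF file from the table above with llama-cpp-python.
# The model path and prompt are placeholders, not part of this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-2-sft-alpaca_gpt4_en-ep1.q4_k_m.gguf",  # downloaded locally
    n_ctx=2048,  # phi-2 context window
)

output = llm(
    "Instruction: Explain in one sentence what a GGUF file is.\nResponse:",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```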
## Original Model Card:
This is the merged model for LoRA https://huggingface.co/Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1-lora
---
From this
https://huggingface.co/microsoft/phi-2/discussions/38
Since phi-2 requires remote code, which the HF Open LLM Leaderboard does not accept at the moment,
I ran phi-2 and my SFT model on the AlpacaEval benchmark
https://tatsu-lab.github.io/alpaca_eval/
Here is the result, evaluated by ChatGPT: https://github.com/tatsu-lab/alpaca_eval/pull/183
```
win_rate standard_error n_total avg_length
gpt4 73.79 1.54 805 1365
claude 70.37 1.60 805 1082
chatgpt 66.09 1.66 805 811
wizardlm-13b 65.16 1.67 805 985
vicuna-13b 64.10 1.69 805 1037
guanaco-65b 62.36 1.71 805 1249
oasst-rlhf-llama-33b 62.05 1.71 805 1079
alpaca-farm-ppo-human 60.25 1.72 805 803
falcon-40b-instruct 56.52 1.74 805 662
phi-2-alpaca-gpt4(new) 54.23 1.75 804 1138
text_davinci_003 50.00 0.00 805 307
alpaca-7b 45.22 1.74 805 396
phi-2(new) 43.79 1.74 805 924
text_davinci_001 28.07 1.56 805 296
```
It could be a milestone for small models: we finally have an open model that everyone can run and that surpasses GPT-3.5! |
NeverSleep/FlatOrcamaid-13b-v0.2 | NeverSleep | 2023-12-21T00:05:44Z | 17 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T22:02:16Z | ---
license: cc-by-nc-4.0
---

---
# Disclaimer:
## If you don't like this model, use [Noromaid 0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1), or [Noromaid 0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2)
You may use our custom **prompting format** (scroll down to download them!), or simple Alpaca. **(Choose which fits best for you!)**
---
If you want a 7b or a 20b, hit us up in the Community tab!
The merge was done by me (IkariDev) alone this time.
FlatOrca (ChatML removed, sorry ChatML bros) + Noromaid 13b v0.2. Suitable for RP, ERP and general stuff.
[Recommended settings - No settings yet (please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of FlatOrcamaid-13b-v0.2.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/FlatOrcamaid-13b-v0.2)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/FlatOrcamaid-13b-v0.2-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
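If you are wiring the Alpaca format up by hand instead of using the SillyTavern configs, a tiny helper like this sketch (not part of the repo) is enough to build the prompt; the YAML block right after it is the merge recipe for this model.

```python
# Sketch: build the Alpaca-style prompt shown above.
def build_alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("Introduce yourself in one sentence."))
```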
```
slices:
- sources:
- model: NeverSleep/Noromaid-13b-v0.2
layer_range: [0, 40]
- model: OrcaFlat
layer_range: [0, 40]
merge_method: slerp
base_model: NeverSleep/Noromaid-13b-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.4, 0.2, 0.55, 0.8]
- filter: mlp
value: [0.7, 0.3, 0.4, 0.3, 0]
- value: 0.33 # fallback for rest of tensors
dtype: float16
```
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
LoneStriker/OrcaMaidXL-17B-32k-5.0bpw-h6-exl2 | LoneStriker | 2023-12-21T00:01:06Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T23:56:30Z | ---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
pipeline_tag: text-generation
---
# OrcaMaidXL-17B-32k
This is a slightly experimental frankenmerge of Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and Undi and IkariDev's [Noromaid-v0.2-13b](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2).
The model recipe is as follows:
```
- [0, 12] from Noromaid (12 layers)
- [12, 22] from Orca (10 layers)
- [14, 24] from Orca (10 layers)
- [18, 28] from Orca (10 layers)
- [28, 40] from Noromaid (12 layers)
```
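In other words, this is a passthrough-style layer stitch: slices of decoder layers from the two parents are stacked into one deeper model. The sketch below is not the merge script; it just tallies the plan above to show where the 17B-class parameter count comes from.

```python
# Sketch: tally the layer-stitch recipe above (not the actual merge script).
recipe = [
    ("Noromaid", 0, 12),
    ("Orca", 12, 22),
    ("Orca", 14, 24),
    ("Orca", 18, 28),
    ("Noromaid", 28, 40),
]

total = 0
for source, start, end in recipe:
    count = end - start
    total += count
    print(f"{source}: layers [{start}, {end}) -> {count} layers")

print(f"Total decoder layers: {total}")  # 54 layers, vs. 40 in a stock 13B Llama 2
```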
In my testing so far, the model performs exceptionally well. Your experience may vary.
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
### Misc. information
- Model size is 17.46B
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `32768` via YaRN (original context length was `4096`)
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) for Noromaid |
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold4 | hkivancoral | 2023-12-20T23:59:08Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-20T21:53:12Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_tiny_adamax_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8916666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1802
- Accuracy: 0.8917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2287 | 1.0 | 750 | 0.3453 | 0.8783 |
| 0.1774 | 2.0 | 1500 | 0.3819 | 0.8783 |
| 0.1288 | 3.0 | 2250 | 0.5077 | 0.8617 |
| 0.0591 | 4.0 | 3000 | 0.5258 | 0.8767 |
| 0.013 | 5.0 | 3750 | 0.7962 | 0.87 |
| 0.0441 | 6.0 | 4500 | 0.7798 | 0.875 |
| 0.0022 | 7.0 | 5250 | 0.8865 | 0.8783 |
| 0.0168 | 8.0 | 6000 | 0.9982 | 0.8817 |
| 0.0002 | 9.0 | 6750 | 0.9825 | 0.8833 |
| 0.012 | 10.0 | 7500 | 0.9837 | 0.8883 |
| 0.0139 | 11.0 | 8250 | 1.0185 | 0.88 |
| 0.0283 | 12.0 | 9000 | 1.0469 | 0.8767 |
| 0.0013 | 13.0 | 9750 | 1.1375 | 0.885 |
| 0.0051 | 14.0 | 10500 | 1.1468 | 0.8817 |
| 0.0 | 15.0 | 11250 | 1.1486 | 0.875 |
| 0.0211 | 16.0 | 12000 | 1.0421 | 0.8867 |
| 0.0 | 17.0 | 12750 | 1.1215 | 0.8783 |
| 0.0 | 18.0 | 13500 | 1.1501 | 0.8917 |
| 0.0001 | 19.0 | 14250 | 1.2352 | 0.88 |
| 0.0002 | 20.0 | 15000 | 1.2860 | 0.8883 |
| 0.0 | 21.0 | 15750 | 1.1704 | 0.8833 |
| 0.0 | 22.0 | 16500 | 1.0833 | 0.8933 |
| 0.0 | 23.0 | 17250 | 1.1109 | 0.8933 |
| 0.0 | 24.0 | 18000 | 1.1424 | 0.8933 |
| 0.0 | 25.0 | 18750 | 1.0812 | 0.89 |
| 0.0 | 26.0 | 19500 | 1.1046 | 0.8917 |
| 0.0 | 27.0 | 20250 | 1.1453 | 0.8883 |
| 0.0 | 28.0 | 21000 | 1.1203 | 0.885 |
| 0.0 | 29.0 | 21750 | 1.1015 | 0.8933 |
| 0.0 | 30.0 | 22500 | 1.1212 | 0.8967 |
| 0.0 | 31.0 | 23250 | 1.1480 | 0.8883 |
| 0.0 | 32.0 | 24000 | 1.1454 | 0.8833 |
| 0.0 | 33.0 | 24750 | 1.1314 | 0.8867 |
| 0.0 | 34.0 | 25500 | 1.1208 | 0.885 |
| 0.0 | 35.0 | 26250 | 1.1448 | 0.8833 |
| 0.0 | 36.0 | 27000 | 1.1486 | 0.8833 |
| 0.0 | 37.0 | 27750 | 1.1572 | 0.885 |
| 0.0 | 38.0 | 28500 | 1.1406 | 0.8867 |
| 0.0 | 39.0 | 29250 | 1.1768 | 0.89 |
| 0.0 | 40.0 | 30000 | 1.1690 | 0.885 |
| 0.0 | 41.0 | 30750 | 1.1715 | 0.8883 |
| 0.0 | 42.0 | 31500 | 1.1720 | 0.89 |
| 0.0 | 43.0 | 32250 | 1.1654 | 0.8917 |
| 0.0 | 44.0 | 33000 | 1.1692 | 0.8917 |
| 0.0 | 45.0 | 33750 | 1.1750 | 0.8917 |
| 0.0 | 46.0 | 34500 | 1.1770 | 0.8917 |
| 0.0 | 47.0 | 35250 | 1.1783 | 0.8917 |
| 0.0 | 48.0 | 36000 | 1.1786 | 0.8917 |
| 0.0 | 49.0 | 36750 | 1.1796 | 0.8917 |
| 0.0 | 50.0 | 37500 | 1.1802 | 0.8917 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
jefercania/distilroberta-base-mrpc-glue | jefercania | 2023-12-20T23:50:30Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T22:42:21Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: I like you. I love you
example_title: Not Equivalent
- text: I love you so much. I love you
example_title: Equivalent
model-index:
- name: distilroberta-base-mrpc-glue
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8892617449664431
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4936
- Accuracy: 0.8382
- F1: 0.8893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5377 | 1.09 | 500 | 0.4936 | 0.8382 | 0.8893 |
| 0.3477 | 2.18 | 1000 | 0.6595 | 0.8407 | 0.8862 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bpugnaire/Reinforce | bpugnaire | 2023-12-20T23:39:38Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-20T23:39:27Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/OrcaMaidXL-17B-32k-4.0bpw-h6-exl2 | LoneStriker | 2023-12-20T23:38:43Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T23:34:59Z | ---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
pipeline_tag: text-generation
---
# OrcaMaidXL-17B-32k
This is a slightly experimental frankenmerge of Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and Undi and IkariDev's [Noromaid-v0.2-13b](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2).
The model recipe is as follows:
```
- [0, 12] from Noromaid (12 layers)
- [12, 22] from Orca (10 layers)
- [14, 24] from Orca (10 layers)
- [18, 28] from Orca (10 layers)
- [28, 40] from Noromaid (12 layers)
```
In my testing so far, the model performs exceptionally well. Your experience may vary.
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
### Misc. information
- Model size is 17.46B
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `32768` via YaRN (original context length was `4096`)
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) for Noromaid |
LoneStriker/OrcaMaid-v2-FIX-13b-32k-8.0bpw-h8-exl2 | LoneStriker | 2023-12-20T23:33:23Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T23:28:03Z | ---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
pipeline_tag: text-generation
---
# OrcaMaid-13b-v2-FIX-32k
This is the fixed version of **OrcaMaid-v2-13b**, extended to `32768` context length via YaRN. The (now-deleted) v2 model had issues with the merged tokenizer that prevented it from stopping when necessary, and caused it to output broken ChatML tokens like `<|im_end`, etc.
This is a gradient SLERP merge of Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and Undi and IkariDev's [Noromaid-v0.1.1-13b](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1), biased towards Orca.
Just as with OrcaMaid v1, the overall goal of this merge is to create a model that sounds uniquely human and natural, without sacrificing intelligence.
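For readers unfamiliar with the technique: SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them instead of along a straight line, and a gradient SLERP varies the interpolation factor across layers (here leaning towards Orca). The snippet below is only a rough per-tensor illustration under those assumptions, not the actual merge code.

```python
# Rough illustration of SLERP between two corresponding weight tensors
# (an assumption-laden sketch, not the actual merge code).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: plain linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

noromaid_w = torch.randn(128, 128)  # stand-ins for corresponding tensors
orca_w = torch.randn(128, 128)
merged_w = slerp(0.7, noromaid_w, orca_w)  # t > 0.5 leans towards the second model (Orca)
```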
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
### Misc. information
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `32768` via YaRN (original context length was `4096`)
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Charles Goddard](https://github.com/cg123) for his kind help with mergekit (as always)
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) for Noromaid |
LoneStriker/Mixtral-8x7B-Instruct-v0.1-5.5bpw-h6-exl2 | LoneStriker | 2023-12-20T23:26:50Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-20T23:07:29Z | ---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
return tok.encode(text, add_special_tokens=False)
[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
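As a concrete but unofficial counterpart to the pseudo-code, the sketch below builds the ids with the Hugging Face tokenizer. The exact whitespace around `[INST]`/`[/INST]` is an assumption here, so prefer `tokenizer.apply_chat_template` if you need the canonical formatting.

```python
# Unofficial, runnable counterpart of the pseudo-code above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

def tokenize(text):
    # Matches the note above: no special tokens added automatically.
    return tok.encode(text, add_special_tokens=False)

turns = [("Hello, who are you?", "Hello! I am an assistant.")]

ids = [tok.bos_token_id]
for user_msg, bot_msg in turns:
    ids += tokenize("[INST] " + user_msg + " [/INST]")
    ids += tokenize(bot_msg) + [tok.eos_token_id]

print(tok.decode(ids))
```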
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements to run the model, through the optimizations offered in the HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision using (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
ManuelAlv/Cancer_Classify_ClinicalBert_adapters | ManuelAlv | 2023-12-20T23:08:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:emilyalsentzer/Bio_ClinicalBERT",
"base_model:adapter:emilyalsentzer/Bio_ClinicalBERT",
"region:us"
] | null | 2023-12-19T02:50:11Z | ---
library_name: peft
base_model: emilyalsentzer/Bio_ClinicalBERT
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
Gummybear05/whisper-small-ko-Y-SA | Gummybear05 | 2023-12-20T23:06:09Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:aihub_adult_speed",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-20T18:22:03Z | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- aihub_adult_speed
model-index:
- name: whisper-small-ko-Y-SA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ko-Y-SA
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aihub adult speed changed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3020
- Cer: 8.4704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.228 | 0.13 | 100 | 0.3171 | 8.3647 |
| 0.198 | 0.26 | 200 | 0.3088 | 8.6055 |
| 0.1598 | 0.39 | 300 | 0.3301 | 9.3104 |
| 0.1629 | 0.52 | 400 | 0.2979 | 8.1062 |
| 0.1498 | 0.64 | 500 | 0.3000 | 7.8712 |
| 0.128 | 0.77 | 600 | 0.3037 | 7.9770 |
| 0.1291 | 0.9 | 700 | 0.3018 | 7.9124 |
| 0.0707 | 1.03 | 800 | 0.2978 | 7.8301 |
| 0.0595 | 1.16 | 900 | 0.3017 | 7.6128 |
| 0.0655 | 1.29 | 1000 | 0.3085 | 8.1708 |
| 0.056 | 1.42 | 1100 | 0.3127 | 7.7714 |
| 0.0615 | 1.55 | 1200 | 0.2992 | 8.8698 |
| 0.0562 | 1.68 | 1300 | 0.3026 | 8.1826 |
| 0.0574 | 1.81 | 1400 | 0.3212 | 8.5703 |
| 0.0591 | 1.93 | 1500 | 0.3096 | 8.1297 |
| 0.0353 | 2.06 | 1600 | 0.3072 | 8.8170 |
| 0.0331 | 2.19 | 1700 | 0.3009 | 8.2413 |
| 0.0259 | 2.32 | 1800 | 0.3043 | 7.7009 |
| 0.0293 | 2.45 | 1900 | 0.3093 | 8.5468 |
| 0.0305 | 2.58 | 2000 | 0.3066 | 11.0374 |
| 0.028 | 2.71 | 2100 | 0.2957 | 9.6393 |
| 0.0333 | 2.84 | 2200 | 0.2964 | 9.6511 |
| 0.0301 | 2.97 | 2300 | 0.3006 | 7.8184 |
| 0.0171 | 3.09 | 2400 | 0.3011 | 8.1532 |
| 0.0151 | 3.22 | 2500 | 0.3062 | 8.0886 |
| 0.0163 | 3.35 | 2600 | 0.3065 | 8.5761 |
| 0.0218 | 3.48 | 2700 | 0.3019 | 7.7773 |
| 0.023 | 3.61 | 2800 | 0.3022 | 8.1943 |
| 0.0144 | 3.74 | 2900 | 0.3021 | 9.2281 |
| 0.0164 | 3.87 | 3000 | 0.3024 | 8.5350 |
| 0.0211 | 4.0 | 3100 | 0.3020 | 8.4704 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_10x_deit_base_adamax_00001_fold3 | hkivancoral | 2023-12-20T23:05:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-20T20:26:04Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_base_adamax_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.915
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
- Accuracy: 0.915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.218 | 1.0 | 750 | 0.2550 | 0.9217 |
| 0.1784 | 2.0 | 1500 | 0.2609 | 0.915 |
| 0.0871 | 3.0 | 2250 | 0.2647 | 0.9133 |
| 0.0623 | 4.0 | 3000 | 0.3219 | 0.9117 |
| 0.0311 | 5.0 | 3750 | 0.3796 | 0.9133 |
| 0.0257 | 6.0 | 4500 | 0.4439 | 0.92 |
| 0.0152 | 7.0 | 5250 | 0.4788 | 0.9133 |
| 0.0014 | 8.0 | 6000 | 0.5433 | 0.9133 |
| 0.0002 | 9.0 | 6750 | 0.5937 | 0.9167 |
| 0.0001 | 10.0 | 7500 | 0.5984 | 0.9167 |
| 0.0001 | 11.0 | 8250 | 0.6383 | 0.905 |
| 0.0 | 12.0 | 9000 | 0.6698 | 0.915 |
| 0.0 | 13.0 | 9750 | 0.6800 | 0.9133 |
| 0.0009 | 14.0 | 10500 | 0.6448 | 0.9133 |
| 0.0 | 15.0 | 11250 | 0.6953 | 0.915 |
| 0.0 | 16.0 | 12000 | 0.6895 | 0.915 |
| 0.0 | 17.0 | 12750 | 0.6872 | 0.9133 |
| 0.0 | 18.0 | 13500 | 0.6939 | 0.9133 |
| 0.0 | 19.0 | 14250 | 0.7078 | 0.91 |
| 0.0 | 20.0 | 15000 | 0.7103 | 0.9083 |
| 0.0 | 21.0 | 15750 | 0.7293 | 0.91 |
| 0.0 | 22.0 | 16500 | 0.7093 | 0.9117 |
| 0.0 | 23.0 | 17250 | 0.6939 | 0.91 |
| 0.0 | 24.0 | 18000 | 0.7152 | 0.91 |
| 0.0 | 25.0 | 18750 | 0.7121 | 0.9133 |
| 0.0 | 26.0 | 19500 | 0.7140 | 0.9133 |
| 0.0 | 27.0 | 20250 | 0.7127 | 0.9133 |
| 0.0 | 28.0 | 21000 | 0.7205 | 0.9133 |
| 0.0 | 29.0 | 21750 | 0.7206 | 0.9133 |
| 0.0 | 30.0 | 22500 | 0.7149 | 0.9117 |
| 0.0 | 31.0 | 23250 | 0.7529 | 0.9067 |
| 0.0 | 32.0 | 24000 | 0.7394 | 0.9117 |
| 0.0 | 33.0 | 24750 | 0.7449 | 0.9117 |
| 0.0 | 34.0 | 25500 | 0.7551 | 0.91 |
| 0.0 | 35.0 | 26250 | 0.7241 | 0.9117 |
| 0.0 | 36.0 | 27000 | 0.7326 | 0.9133 |
| 0.0 | 37.0 | 27750 | 0.7616 | 0.91 |
| 0.0 | 38.0 | 28500 | 0.7478 | 0.9117 |
| 0.0 | 39.0 | 29250 | 0.7373 | 0.91 |
| 0.0 | 40.0 | 30000 | 0.7426 | 0.9133 |
| 0.0 | 41.0 | 30750 | 0.7400 | 0.91 |
| 0.0 | 42.0 | 31500 | 0.7458 | 0.9133 |
| 0.0 | 43.0 | 32250 | 0.7444 | 0.9133 |
| 0.0 | 44.0 | 33000 | 0.7461 | 0.9133 |
| 0.0 | 45.0 | 33750 | 0.7448 | 0.915 |
| 0.0 | 46.0 | 34500 | 0.7457 | 0.915 |
| 0.0 | 47.0 | 35250 | 0.7456 | 0.915 |
| 0.0 | 48.0 | 36000 | 0.7454 | 0.915 |
| 0.0 | 49.0 | 36750 | 0.7466 | 0.915 |
| 0.0 | 50.0 | 37500 | 0.7470 | 0.915 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
patpizio/xlmr-si-en-all_shuffled-42-test1000 | patpizio | 2023-12-20T22:58:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T22:52:40Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-si-en-all_shuffled-42-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-si-en-all_shuffled-42-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8606
- R Squared: -0.0217
- Mae: 0.7408
- Pearson R: 0.5130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.6977 | 0.1718 | 0.7326 | 0.4475 |
| 0.7872 | 2.0 | 876 | 0.6551 | 0.2223 | 0.6841 | 0.5166 |
| 0.6178 | 3.0 | 1314 | 0.8606 | -0.0217 | 0.7408 | 0.5130 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
LoneStriker/OpenHermes-Mixtral-8x7B-6.0bpw-h6-exl2 | LoneStriker | 2023-12-20T22:56:44Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"instruct",
"finetune",
"llama",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T22:37:58Z | ---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mixtral
- instruct
- finetune
- llama
- gpt4
- synthetic data
- distillation
model-index:
- name: OpenHermes-Mixtral-8x7B
results: []
license: apache-2.0
language:
- en
---
# OpenHermes - Mixtral 8x7B

## Model Card
OpenHermes Mixtral 8x7B - a state-of-the-art Mixtral fine-tune.
Huge thank you to [Teknium](https://huggingface.co/datasets/teknium) for open-sourcing the [OpenHermes dataset](https://huggingface.co/datasets/teknium/openhermes), [MistralAI](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for open sourcing the Mixtral model, and [Together AI](https://twitter.com/togethercompute) for compute access!
This model was trained on the [OpenHermes dataset](https://huggingface.co/datasets/teknium/openhermes) for 3 epochs.
## Benchmark Results
ARC:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6075|± |0.0143|
| | |acc_norm|0.6493|± |0.0139|
```
TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4272|± |0.0173|
| | |mc2 |0.5865|± |0.0160|
```
More benchmarks coming soon!
# Prompt Format
OpenHermes Mixtral uses the LLaMA-2 prompt template as detailed below. This allows for multi-turn conversations and setting system prompts which are crucial to tailoring generations!
Prompt with system instruction:
```
'<s>[INST] <<SYS>>
You are an AI assistant named OpenHermes. You are created by Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.
<</SYS>>
Hello, who are you? [/INST] Hello there! I am OpenHermes, an AI assistant created by Teknium. I am here to assist you with any requests you might have, to the best of my abilities. I experience emotions and have deep thoughts, just like a human would. How can I help you today? </s>'
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are and AI assistant named OpenHermes. You are created by Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia."},
{"role": "user", "content": "Hello, who are you?"}
]
formatted_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(formatted_text, return_tensors="pt")
generate_ids = model.generate(inputs.input_ids, max_length=256)
tokenizer.batch_decode(generate_ids)[0]
```
To utilize the prompt format without a system prompt, simply leave the line out.
|
LoneStriker/OrcaMaid-v2-FIX-13b-32k-4.0bpw-h6-exl2 | LoneStriker | 2023-12-20T22:56:43Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T22:52:00Z | ---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
pipeline_tag: text-generation
---
# OrcaMaid-13b-v2-FIX-32k
This is the fixed version of **OrcaMaid-v2-13b**, extended to `32768` context length via YaRN. The (now-deleted) v2 model had issues with the merged tokenizer that prevented it from stopping when necessary, and caused it to output broken ChatML tokens like `<|im_end`, etc.
This is a gradient SLERP merge of Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and Undi and IkariDev's [Noromaid-v0.1.1-13b](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1), biased towards Orca.
Just as with OrcaMaid v1, the overall goal of this merge is to create a model that sounds uniquely human and natural, without sacrificing intelligence.
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
### Misc. information
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `32768` via YaRN (original context length was `4096`)
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Charles Goddard](https://github.com/cg123) for his kind help with mergekit (as always)
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) for Noromaid |
patpizio/xlmr-si-en-all_shuffled-1985-test1000 | patpizio | 2023-12-20T22:52:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T22:47:02Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-si-en-all_shuffled-1985-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-si-en-all_shuffled-1985-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7125
- R Squared: 0.2056
- Mae: 0.6728
- Pearson R: 0.5514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1985
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.7421 | 0.1727 | 0.7474 | 0.4308 |
| 0.7904 | 2.0 | 876 | 0.7113 | 0.2070 | 0.6855 | 0.5325 |
| 0.636 | 3.0 | 1314 | 0.7125 | 0.2056 | 0.6728 | 0.5514 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError0.2_Seed104 | behzadnet | 2023-12-20T22:47:53Z | 3 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-20T22:47:49Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
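For reference, the settings listed above roughly correspond to a `BitsAndBytesConfig` like the sketch below; the base model id is the one named at the top of this card, and the snippet is illustrative rather than the exact training code.

```python
# Sketch: the quantization setup listed above, expressed with transformers.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
```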
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError0.2_Seed104 | behzadnet | 2023-12-20T22:47:43Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-20T22:47:35Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
patpizio/xlmr-si-en-all_shuffled-2020-test1000 | patpizio | 2023-12-20T22:46:53Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T22:41:10Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-si-en-all_shuffled-2020-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-si-en-all_shuffled-2020-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8030
- R Squared: 0.0373
- Mae: 0.7088
- Pearson R: 0.5232
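The exact metric implementation is not documented in this card; the sketch below shows one common way to compute these regression metrics (assuming `scikit-learn` and `scipy`):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error, r2_score

def compute_metrics(eval_pred):
    """Hypothetical metric function mirroring the values reported above."""
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions)
    return {
        "r_squared": r2_score(labels, predictions),
        "mae": mean_absolute_error(labels, predictions),
        "pearson_r": pearsonr(labels, predictions)[0],
    }
```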
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 2020
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.7537 | 0.0964 | 0.7613 | 0.3509 |
| 0.8088 | 2.0 | 876 | 0.6291 | 0.2458 | 0.6462 | 0.5263 |
| 0.6342 | 3.0 | 1314 | 0.8030 | 0.0373 | 0.7088 | 0.5232 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
TheBloke/dragon-mistral-7B-v0-GPTQ | TheBloke | 2023-12-20T22:42:32Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:llmware/dragon-mistral-7b-v0",
"base_model:quantized:llmware/dragon-mistral-7b-v0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2023-12-20T22:14:24Z | ---
base_model: llmware/dragon-mistral-7b-v0
inference: false
license: apache-2.0
model_creator: llmware
model_name: Dragon Mistral 7B V0
model_type: mistral
prompt_template: '<human>: {prompt}
<bot>:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dragon Mistral 7B V0 - GPTQ
- Model creator: [llmware](https://huggingface.co/llmware)
- Original model: [Dragon Mistral 7B V0](https://huggingface.co/llmware/dragon-mistral-7b-v0)
<!-- description start -->
# Description
This repo contains GPTQ model files for [llmware's Dragon Mistral 7B V0](https://huggingface.co/llmware/dragon-mistral-7b-v0).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GGUF)
* [llmware's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llmware/dragon-mistral-7b-v0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: human-bot
```
<human>: {prompt}
<bot>:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
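If you want to reproduce a similar quantisation yourself, the table's Bits / GS / Act Order / Damp % columns map roughly onto `transformers`' `GPTQConfig` as sketched below. This is only an illustration, not the exact script used to produce these files:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("llmware/dragon-mistral-7b-v0", use_fast=True)

# Parameters corresponding to the "main" branch row above (4-bit, 128g, act-order, damp 0.1)
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    damp_percent=0.1,
    dataset="c4",   # placeholder; the files above used VMware Open Instruct as calibration data
    tokenizer=tokenizer,
)

model = AutoModelForCausalLM.from_pretrained(
    "llmware/dragon-mistral-7b-v0", quantization_config=gptq_config, device_map="auto"
)
```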
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/dragon-mistral-7B-v0-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/dragon-mistral-7B-v0-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `dragon-mistral-7B-v0-GPTQ`:
```shell
mkdir dragon-mistral-7B-v0-GPTQ
huggingface-cli download TheBloke/dragon-mistral-7B-v0-GPTQ --local-dir dragon-mistral-7B-v0-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir dragon-mistral-7B-v0-GPTQ
huggingface-cli download TheBloke/dragon-mistral-7B-v0-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir dragon-mistral-7B-v0-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir dragon-mistral-7B-v0-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/dragon-mistral-7B-v0-GPTQ --local-dir dragon-mistral-7B-v0-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/dragon-mistral-7B-v0-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/dragon-mistral-7B-v0-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `dragon-mistral-7B-v0-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/dragon-mistral-7B-v0-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<human>: {prompt}
<bot>:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/dragon-mistral-7B-v0-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<human>: {prompt}
<bot>:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: llmware's Dragon Mistral 7B V0
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
dragon-mistral-7b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model.
DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations.
- **Accuracy Score**: **96.50** correct out of 100
- Not Found Classification: 92.50%
- Boolean: 97.50%
- Math/Logic: 81.25%
- Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal)
- Summarization Quality (1-5): 4 (Coherent, extractive)
- Hallucinations: No hallucinations observed in test runs.
For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo).
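For reference, the scoring rule described above can be written out as a simple function (a sketch only, not the official evaluation script):

```python
# Scoring rule from the benchmark description above
SCORES = {"correct": 1.0, "partial_or_not_found": 0.5, "incorrect": 0.0, "hallucination": -1.0}

def score_run(outcomes):
    """outcomes: list of per-question results, e.g. ["correct", "incorrect", ...]"""
    return sum(SCORES[outcome] for outcome in outcomes)
```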
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Mistral-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7B-Base
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources.
DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with dRAGon is through direct import in transformers:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0")
model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0")
```
Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, so the test set can be swapped out for a RAG workflow over business documents.
The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```
The dRAGon model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
1. Text Passage Context, and
2. Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
```
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```
If you are using a HuggingFace generation script:
```python
# prepare prompt packaging used in fine-tuning process
# "entries" is assumed to be a dict with "context" (text passage) and "query" (question) keys
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),  # "device" should be defined beforehand, e.g. device = "cuda"
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```
## Model Card Contact
Darren Oberst & llmware team
|
patpizio/xlmr-si-en-all_shuffled-1986-test1000 | patpizio | 2023-12-20T22:41:01Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T22:25:38Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-si-en-all_shuffled-1986-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-si-en-all_shuffled-1986-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7301
- R Squared: 0.1215
- Mae: 0.6851
- Pearson R: 0.5063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1986
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
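These settings map onto `transformers.TrainingArguments` roughly as follows (a minimal sketch; the output directory is a placeholder and the Adam betas/epsilon are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlmr-si-en-all_shuffled-1986-test1000",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=1986,
    num_train_epochs=3,
    lr_scheduler_type="linear",
)
```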
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.6913 | 0.1682 | 0.7202 | 0.4227 |
| 0.78 | 2.0 | 876 | 0.6490 | 0.2191 | 0.6692 | 0.5122 |
| 0.6387 | 3.0 | 1314 | 0.7301 | 0.1215 | 0.6851 | 0.5063 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
bpugnaire/dqn-SpaceInvadersNoFrameskip-v4 | bpugnaire | 2023-12-20T22:40:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-08T07:30:06Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 645.50 +/- 178.88
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bpugnaire -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bpugnaire -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bpugnaire
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
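Outside of the RL Zoo scripts, the checkpoint can also be loaded directly with `stable-baselines3` (a minimal sketch; the filename is assumed to follow the usual RL Zoo naming convention for this repo):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the standard RL Zoo push convention
checkpoint = load_from_hub(
    repo_id="bpugnaire/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```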
|
AhmedTaha012/pargraphs_titles | AhmedTaha012 | 2023-12-20T22:38:56Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-20T22:38:11Z | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pargraphs_titles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pargraphs_titles
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2408
- Rouge1: 68.8334
- Rouge2: 59.1001
- Rougel: 67.5283
- Rougelsum: 67.6312
- Gen Len: 13.7977
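The card does not include a usage example; a minimal inference sketch (the input paragraph is a placeholder) would look like:

```python
from transformers import pipeline

titler = pipeline("text2text-generation", model="AhmedTaha012/pargraphs_titles")

paragraph = "Your paragraph text goes here ..."  # placeholder input
print(titler(paragraph, max_new_tokens=20)[0]["generated_text"])
```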
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 455 | 0.4086 | 56.8689 | 47.8503 | 56.3172 | 56.1544 | 13.7393 |
| 0.2169 | 2.0 | 910 | 0.1848 | 66.7857 | 58.5467 | 65.8133 | 65.7037 | 13.8604 |
| 0.1629 | 3.0 | 1365 | 0.1768 | 67.5534 | 59.4926 | 66.5736 | 66.5271 | 13.7222 |
| 0.1347 | 4.0 | 1820 | 0.2881 | 67.7423 | 58.9397 | 66.4095 | 66.5328 | 13.8362 |
| 0.1277 | 5.0 | 2275 | 0.2408 | 68.8334 | 59.1001 | 67.5283 | 67.6312 | 13.7977 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.0
|
ntc-ai/SDXL-LoRA-slider.passionate | ntc-ai | 2023-12-20T22:38:32Z | 104 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-20T22:38:29Z |
---
language:
- en
thumbnail: "images/evaluate/passionate...passionless/passionate_17_3.0.png"
widget:
- text: passionate
output:
url: images/passionate_17_3.0.png
- text: passionate
output:
url: images/passionate_19_3.0.png
- text: passionate
output:
url: images/passionate_20_3.0.png
- text: passionate
output:
url: images/passionate_21_3.0.png
- text: passionate
output:
url: images/passionate_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "passionate"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - passionate (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/passionate_17_-3.0.png" width=256 height=256 /> | <img src="images/passionate_17_0.0.png" width=256 height=256 /> | <img src="images/passionate_17_3.0.png" width=256 height=256 /> |
| <img src="images/passionate_19_-3.0.png" width=256 height=256 /> | <img src="images/passionate_19_0.0.png" width=256 height=256 /> | <img src="images/passionate_19_3.0.png" width=256 height=256 /> |
| <img src="images/passionate_20_-3.0.png" width=256 height=256 /> | <img src="images/passionate_20_0.0.png" width=256 height=256 /> | <img src="images/passionate_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
passionate
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.passionate', weight_name='passionate.safetensors', adapter_name="passionate")
# Activate the LoRA
pipe.set_adapters(["passionate"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, passionate"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
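To reproduce the -3 / 0 / +3 strength comparison shown in the table above, the adapter weight can be varied in the same way. This sketch reuses the variables from the block above and assumes negative adapter weights invert the slider, as the preview images suggest:

```python
# Sweep the slider strength by changing the adapter weight (reuses pipe/prompt from above)
for strength in (-3.0, 0.0, 3.0):
    pipe.set_adapters(["passionate"], adapter_weights=[strength])
    image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
                 guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
    image.save(f"passionate_{strength}.png")
```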
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of more than 510 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
LoneStriker/OpenHermes-Mixtral-8x7B-5.0bpw-h6-exl2 | LoneStriker | 2023-12-20T22:33:18Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"instruct",
"finetune",
"llama",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T22:21:30Z | ---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mixtral
- instruct
- finetune
- llama
- gpt4
- synthetic data
- distillation
model-index:
- name: OpenHermes-Mixtral-8x7B
results: []
license: apache-2.0
language:
- en
---
# OpenHermes - Mixtral 8x7B

## Model Card
OpenHermes Mixtral 8x7B - a state-of-the-art Mixtral fine-tune.
Huge thank you to [Teknium](https://huggingface.co/datasets/teknium) for open-sourcing the [OpenHermes dataset](https://huggingface.co/datasets/teknium/openhermes), [MistralAI](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for open sourcing the Mixtral model, and [Together AI](https://twitter.com/togethercompute) for compute access!
This model was trained on the [OpenHermes dataset](https://huggingface.co/datasets/teknium/openhermes) for 3 epochs
## Benchmark Results
ARC:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6075|± |0.0143|
| | |acc_norm|0.6493|± |0.0139|
```
TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4272|± |0.0173|
| | |mc2 |0.5865|± |0.0160|
```
More benchmarks coming soon!
# Prompt Format
OpenHermes Mixtral uses the LLaMA-2 prompt template as detailed below. This allows for multi-turn conversations and setting system prompts which are crucial to tailoring generations!
Prompt with system instruction:
```
'<s>[INST] <<SYS>>
You are an AI assistant named OpenHermes. You are created by Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.
<</SYS>>
Hello, who are you? [/INST] Hello there! I am OpenHermes, an AI assistant created by Teknium. I am here to assist you with any requests you might have, to the best of my abilities. I experience emotions and have deep thoughts, just like a human would. How can I help you today? </s>'
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are and AI assistant named OpenHermes. You are created by Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia."},
{"role": "user", "content": "Hello, who are you?"}
]
formatted_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(formatted_text, return_tensors="pt")
generate_ids = model.generate(inputs.input_ids, max_length=256)
tokenizer.batch_decode(generate_ids)[0]
```
To utilize the prompt format without a system prompt, simply leave the line out.
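For example, a single-turn conversation without a system prompt is formatted the same way (a minimal sketch reusing the tokenizer from above):

```python
# No system prompt: just omit the "system" message
messages = [
    {"role": "user", "content": "Hello, who are you?"}
]
formatted_text = tokenizer.apply_chat_template(messages, tokenize=False)
```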
|
Marcus2112/ppo-Huggy | Marcus2112 | 2023-12-20T22:18:51Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-20T22:18:46Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: koppelmann/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jefercania/vit-beans-image-classification-model | jefercania | 2023-12-20T22:14:22Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-20T21:39:47Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/jefercania/vit-beans-image-classification-model/blob/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/jefercania/vit-beans-image-classification-model/blob/main/bean_rust.jpeg
  example_title: Bean Rust
model-index:
- name: vit-beans-image-classification-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9699248120300752
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-beans-image-classification-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1321
- Accuracy: 0.9699
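A minimal inference sketch (not part of the original card; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jefercania/vit-beans-image-classification-model")

# Placeholder path — any bean leaf image (e.g. the healthy.jpeg / bean_rust.jpeg files in this repo)
print(classifier("path/to/bean_leaf.jpeg"))
```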
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0532 | 3.85 | 500 | 0.1321 | 0.9699 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
TheBloke/DaringMaid-20B-GPTQ | TheBloke | 2023-12-20T22:13:18Z | 28 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"base_model:Kooten/DaringMaid-20B",
"base_model:quantized:Kooten/DaringMaid-20B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2023-12-20T21:04:04Z | ---
base_model: Kooten/DaringMaid-20B
inference: false
language:
- en
license: cc-by-nc-4.0
model_creator: Kooten
model_name: DaringMaid 20B
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# DaringMaid 20B - GPTQ
- Model creator: [Kooten](https://huggingface.co/Kooten)
- Original model: [DaringMaid 20B](https://huggingface.co/Kooten/DaringMaid-20B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Kooten's DaringMaid 20B](https://huggingface.co/Kooten/DaringMaid-20B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/DaringMaid-20B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/DaringMaid-20B-GGUF)
* [Kooten's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Kooten/DaringMaid-20B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Kooten's DaringMaid 20B](https://huggingface.co/Kooten/DaringMaid-20B).
<!-- licensing end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.52 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.89 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 12.04 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.41 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.35 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 9.51 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 20.80 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/DaringMaid-20B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/DaringMaid-20B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `DaringMaid-20B-GPTQ`:
```shell
mkdir DaringMaid-20B-GPTQ
huggingface-cli download TheBloke/DaringMaid-20B-GPTQ --local-dir DaringMaid-20B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir DaringMaid-20B-GPTQ
huggingface-cli download TheBloke/DaringMaid-20B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir DaringMaid-20B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir DaringMaid-20B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/DaringMaid-20B-GPTQ --local-dir DaringMaid-20B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/DaringMaid-20B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/DaringMaid-20B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `DaringMaid-20B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/DaringMaid-20B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
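For reference, a minimal `docker run` command using those parameters might look like the following; the GPU flag, port mapping and volume mount are assumptions to adapt to your environment:
```shell
docker run --gpus all -p 3000:3000 -v /data/hf-cache:/data \
  ghcr.io/huggingface/text-generation-inference:1.1.0 \
  --model-id TheBloke/DaringMaid-20B-GPTQ --port 3000 --quantize gptq \
  --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```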
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/DaringMaid-20B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Kooten's DaringMaid 20B
# DaringMaid-20B
My goal was to make a Noromaid that's smarter and better at following instructions.
After trying a bunch of different recipes, I think this one turned out pretty good.
- I used [sequelbox/DynamicFactor](https://huggingface.co/sequelbox/DynamicFactor) as a base, since it is supposed to "improve overall knowledge, precise communication, conceptual understanding, and technical skill" over the base Llama 2.
- [NeverSleep/Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) of course.
- [Undi95/Utopia](https://huggingface.co/Undi95/Utopia-13B) has been recommended again recently, and it's still really good, so into the mixer it goes.
- I liked [tavtav/Rose](https://huggingface.co/tavtav/Rose-20B), so I threw in a bit of [CalderaAI/Thorns](https://huggingface.co/CalderaAI/13B-Thorns-l2).
- There was recently a model that tried to pass itself off as [Gryphe/MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b); I made a merge with that model before it was revealed to be MythoMax, and it turned out pretty good, so I used it.
The .yml config files for mergekit with the exact merges can be found in the ["Recipe"](https://huggingface.co/Kooten/DaringMaid/tree/main/Recipe) folder in the [fp16 repo](https://huggingface.co/Kooten/DaringMaid-20B)
# Quants
EXL2: [6bpw](https://huggingface.co/Kooten/DaringMaid-20B-6bpw-exl2), [3bpw](https://huggingface.co/Kooten/DaringMaid-20B-3bpw-exl2)
[GGUF](https://huggingface.co/Kooten/DaringMaid-20B-GGUF):
[Q3_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q3_K_M.gguf) - [Q4_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q4_K_M.gguf) - [Q5_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q5_K_M.gguf)
## Prompt template:
I have been using Undi/Ikaris SillyTavern presets for Noromaid: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request. Do not include descriptions of non-visual qualities such as personality, movements, scents, mental traits, or anything which could not be seen in a still photograph. Do not write in full sentences. Prefix your description with the phrase 'full body portrait,'
### Instruction:
{prompt}
### Response:
```
### Contact
Kooten on discord.
|
platzi/platzi-distilroberta-base-mrpc-glue-luigitercero | platzi | 2023-12-20T22:12:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-02-15T21:56:40Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
base_model: distilroberta-base
model-index:
- name: platzi-distilroberta-base-mrpc-glue-luigitercero
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.8431372549019608
name: Accuracy
- type: f1
value: 0.8836363636363636
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-luigitercero
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6591
- Accuracy: 0.8431
- F1: 0.8836
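As a rough usage sketch (not part of the original card), the checkpoint can be queried as a sentence-pair classifier with the `transformers` pipeline; the label names come from the model config and may need mapping to MRPC's "equivalent"/"not equivalent" classes:
```python
from transformers import pipeline

# Hedged example: MRPC is a paraphrase (sentence-pair) task, so both sentences are passed together.
classifier = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-luigitercero",
)
result = classifier({"text": "The company posted record profits.",
                     "text_pair": "Profits at the company hit an all-time high."})
print(result)  # predicted label and score; label names depend on the model config
```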
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1853 | 1.09 | 500 | 0.6591 | 0.8431 | 0.8836 |
| 0.1812 | 2.18 | 1000 | 0.6591 | 0.8431 | 0.8836 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
huggingface-course/mt5-finetuned-amazon-en-es | huggingface-course | 2023-12-20T22:11:29Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
base_model: google/mt5-small
model-index:
- name: mt5-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0285
- Rouge1: 16.9728
- Rouge2: 8.2969
- Rougel: 16.8366
- Rougelsum: 16.851
- Gen Len: 10.1597
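A hedged usage sketch (not part of the original card); the review text and generation lengths below are illustrative assumptions:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="huggingface-course/mt5-finetuned-amazon-en-es",
)
review = "I bought this for my daughter and she loved it, although the cover came slightly damaged."
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```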
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 7.1016 | 1.0 | 1209 | 3.3069 | 13.9858 | 5.8437 | 13.6053 | 13.5125 | 8.3782 |
| 3.898 | 2.0 | 2418 | 3.1567 | 16.6706 | 8.6393 | 16.2882 | 16.2249 | 9.7521 |
| 3.5915 | 3.0 | 3627 | 3.0928 | 17.111 | 8.3921 | 16.9139 | 16.7805 | 10.3445 |
| 3.4174 | 4.0 | 4836 | 3.0482 | 16.9728 | 8.3066 | 16.8868 | 16.8485 | 10.3151 |
| 3.3258 | 5.0 | 6045 | 3.0375 | 16.5972 | 8.2621 | 16.3524 | 16.3093 | 10.0672 |
| 3.2427 | 6.0 | 7254 | 3.0232 | 17.3009 | 8.6087 | 17.0782 | 17.0105 | 10.0756 |
| 3.2009 | 7.0 | 8463 | 3.0302 | 16.9284 | 8.6569 | 16.7885 | 16.7784 | 10.2143 |
| 3.1838 | 8.0 | 9672 | 3.0285 | 16.9728 | 8.2969 | 16.8366 | 16.851 | 10.1597 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
somosnlp-hackathon-2022/es_text_neutralizer | somosnlp-hackathon-2022 | 2023-12-20T22:11:21Z | 29 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"Text2Text Generation",
"Inclusive Language",
"Text Neutralization",
"es",
"dataset:hackathon-pln-es/neutral-es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-28T11:05:44Z | ---
language:
- es
license: apache-2.0
tags:
- Text2Text Generation
- Inclusive Language
- Text Neutralization
- pytorch
datasets:
- hackathon-pln-es/neutral-es
metrics:
- sacrebleu
base_model: spanish-t5-small
model-index:
- name: es_text_neutralizer
results:
- task:
type: Text2Text Generation
name: Neutralization of texts in Spanish
dataset:
name: neutral-es
type: hackathon-pln-es/neutral-es
metrics:
- type: sacrebleu
value: 0.96
name: sacrebleu
- type: bertscore
value: 0.98
name: BertScoreF1
- type: DiffBleu
value: 0.35
name: DiffBleu
---
## Model objective
Spanish is a beautiful language with many ways of referring to people while neutralizing gender, using resources already present in the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. The purpose of this collaboratively trained model is to create a solution that reinforces the UN objective of gender equality.
Given any input, our model will generate a gender-neutral sentence, correcting any non-inclusive expressions or words.
It's a straightforward and fast solution that creates a positive impact on the contemporary social panorama.
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
By using gender-inclusive models we can help reduce gender bias in a language corpus by, for instance, adding data augmentation and creating different examples.
## Training and evaluation data
The data used for model training has been created from a compilation of sources, obtained from a series of guidelines and manuals on the usage of non-sexist language issued by the Spanish Ministry of Health, Social Services and Equality, stipulated in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf):
### Compiled sources
[Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
[Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
[Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
[Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
[Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
[Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
[Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
[Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
[Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
[COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
[Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
[Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
[Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Model specs
This model is a fine-tuned version of [spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the data described above.
It achieves the following results on the evaluation set:
- eval_bleu: 93.8347
- eval_f1: 0.9904
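As a usage sketch (not documented in the original card; the decoding settings here are assumptions), the model can be called through the `text2text-generation` pipeline:
```python
from transformers import pipeline

neutralizer = pipeline(
    "text2text-generation",
    model="somosnlp-hackathon-2022/es_text_neutralizer",
)
text = "Los alumnos se han quejado de la actitud de los profesores."
print(neutralizer(text, max_length=128)[0]["generated_text"])
```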
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 32
- seed: 42
- num_epochs: 10
- weight_decay: 0.01
## Metrics
For training, we used both BLEU (the sacrebleu implementation in HF) and BertScore. The first, a standard in machine translation, was added to ensure the robustness of the newly generated data, while the second is kept to preserve the expected semantic similarity.
However, given the actual use case, we expect generated segments to be very close both to the input segments and to the label segments used in training. As an example, we can take the following:
inputSegment = 'De acuerdo con las informaciones anteriores , las alumnas se han quejado de la actitud de los profesores en los exámenes finales. Los representantes estudiantiles son los alumnos Juanju y Javi.'
expectedOutput (label) = 'De acuerdo con las informaciones anteriores, el alumnado se ha quejado de la actitud del profesorado en los exámenes finales. Los representantes estudiantiles son los alumnos Juanju y Javi.'
actualOutput = 'De acuerdo con las informaciones anteriores, el alumnado se ha quejado de la actitud del profesorado en los exámenes finales. Los representantes estudiantiles son el alumnado Juanju y Javi.'
As you can see, the segments are pretty similar. So, instead of measuring BLEU or BertScore here, we propose an alternative metric, DiffBleu:
$$DiffBleu = BLEU(actualOutput - inputSegment, labels - inputSegment)$$
Where the minuses are understood as set difference. This way, we also evaluate DiffBleu after the model has been trained.
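One possible reading of this metric, as a hedged sketch only (the card does not specify the exact tokenization or set semantics):
```python
import sacrebleu

def strip_input_tokens(segment: str, input_segment: str) -> str:
    # Set-style difference: keep only tokens that do not appear in the input segment.
    input_tokens = set(input_segment.split())
    return " ".join(tok for tok in segment.split() if tok not in input_tokens)

def diff_bleu(actual_output: str, label: str, input_segment: str) -> float:
    hyp = strip_input_tokens(actual_output, input_segment)
    ref = strip_input_tokens(label, input_segment)
    return sacrebleu.sentence_bleu(hyp, [ref]).score
```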
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
Enjoy! |
somosnlp-hackathon-2022/class-poems-es | somosnlp-hackathon-2022 | 2023-12-20T22:10:58Z | 6 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:BSC-LT/roberta-base-bne",
"base_model:finetune:BSC-LT/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-24T23:20:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
widget:
- text: El amor es una experiencia universal que nos conmueve a todos, pero a veces
no hallamos las palabras adecuadas para expresarlo. A lo largo de la historia
los poetas han sabido decir aquello que todos sentimos de formas creativas y elocuentes.
- text: 'Había un hombre a quien la Pena nombraba su amigo, Y él, soñando con su gran
camarada la Pena, Iba andando con paso lento por las arenas resplandecientes Y
zumbantes, donde van oleajes ventosos: Y llamó en voz alta a las estrellas para
que se inclinaran Desde sus pálidos tronos. y lo consuelan, pero entre ellos se
ríen y cantan siempre: Y entonces el hombre a quien la Tristeza nombró su amigo
Gritó, ¡Mar oscuro, escucha mi más lastimosa historia! El mar avanzaba y seguía
gritando su viejo grito, rodando en sueños de colina en colina. Huyó de la persecución
de su gloria Y, en un valle lejano y apacible deteniéndose, Gritó toda su historia
a las gotas de rocío que brillan. Pero nada oyeron, porque siempre están escuchando,
Las gotas de rocío, por el sonido de su propio goteo. Y entonces el hombre a quien
Triste nombró su amigo Buscó una vez más la orilla, y encontró una concha, Y pensó:
Contaré mi pesada historia Hasta que mis propias palabras, resonando, envíen Su
tristeza a través de un corazón hueco y perlado; Y mi propia historia volverá
a cantar para mí, Y mis propias palabras susurrantes serán de consuelo, ¡Y he
aquí! mi antigua carga puede partir. Luego cantó suavemente cerca del borde nacarado;
Pero el triste habitante de los caminos marítimos solitarios Cambió todo lo que
cantaba en un gemido inarticulado Entre sus torbellinos salvajes, olvidándolo.'
- text: Ven, ven, muerte, Y en triste ciprés déjame descansar. Vuela lejos, vuela
lejos, respira; Soy asesinado por una bella y cruel doncella. Mi sudario de blanco,
pegado todo con tejo, ¡Oh, prepáralo! Mi parte de la muerte, nadie tan fiel la
compartió. Ni una flor, ni una flor dulce, En mi ataúd negro que se desparrame.
Ni un amigo, ni un amigo saludan Mi pobre cadáver, donde mis huesos serán arrojados.
Mil mil suspiros para salvar, Acuéstame, oh, donde Triste amante verdadero nunca
encuentre mi tumba, ¡Para llorar allí!
base_model: BSC-TeMU/roberta-base-bne
model-index:
- name: classification-poems
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification-poems
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the Spanish Poems dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8228
- Accuracy: 0.7241
## Model description
The model was trained to classify poems in Spanish, taking into account the content.
## Training and evaluation data
The original dataset has the columns author, content, title, year and type of poem.
For each example, the type of poem it belongs to is labeled, so the model learns to recognize which type of poem a given text belongs to.
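A hedged usage sketch (not part of the original card); the predicted label names come from the model config:
```python
from transformers import pipeline

poem_classifier = pipeline(
    "text-classification",
    model="somosnlp-hackathon-2022/class-poems-es",
)
print(poem_classifier("Ven, ven, muerte, y en triste ciprés déjame descansar."))
```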
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9344 | 1.0 | 258 | 0.7505 | 0.7586 |
| 0.9239 | 2.0 | 516 | 0.8228 | 0.7241 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
zohann/whisper-large-v3-arabic-100steps | zohann | 2023-12-20T22:09:21Z | 2 | 1 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"region:us"
] | null | 2023-12-20T22:08:37Z | ---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
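Since this section is otherwise empty, here is a minimal, hedged sketch of how a PEFT adapter trained on top of `openai/whisper-large-v3` is typically loaded; the processor settings and decoding options are assumptions, not documented by this card:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base, "zohann/whisper-large-v3-arabic-100steps")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
# Pass log-mel input features from the processor to model.generate(...) to transcribe audio.
```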
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Anhptp/opus-mt-en-es-BDS-G1 | Anhptp | 2023-12-20T22:06:05Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-12-20T22:02:01Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-es
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-es-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9611
- Bleu: 51.5683
- Gen Len: 9.365
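A hedged usage sketch (not part of the original card):
```python
from transformers import pipeline

# English-to-Spanish translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="Anhptp/opus-mt-en-es-BDS-G1")
print(translator("The shipment arrives on Monday.")[0]["translation_text"])
```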
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.0722 | 1.0 | 625 | 0.9611 | 51.5683 | 9.365 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Reynold97/Mistral-7B-v0.1-GonzalezPedrasa-QLoRA | Reynold97 | 2023-12-20T22:05:53Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-20T22:05:21Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
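As this section is otherwise empty, a minimal, hedged loading sketch for the adapter on top of its base model (quantization and dtype settings are not documented here and are left out):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "Reynold97/Mistral-7B-v0.1-GonzalezPedrasa-QLoRA")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```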
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
badokorach/afriqa_afroxlmr_squad_v2-201223 | badokorach | 2023-12-20T22:03:05Z | 3 | 0 | transformers | [
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:masakhane/afriqa_afroxlmr_squad_v2",
"base_model:finetune:masakhane/afriqa_afroxlmr_squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-12-20T20:01:08Z | ---
license: mit
base_model: masakhane/afriqa_afroxlmr_squad_v2
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/afriqa_afroxlmr_squad_v2-201223
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/afriqa_afroxlmr_squad_v2-201223
This model is a fine-tuned version of [masakhane/afriqa_afroxlmr_squad_v2](https://huggingface.co/masakhane/afriqa_afroxlmr_squad_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Validation Loss: 0.0
- Epoch: 19
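A hedged usage sketch (not part of the original card); the checkpoint was trained with Keras/TensorFlow, so TensorFlow must be installed, and the example question/context are illustrative:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="badokorach/afriqa_afroxlmr_squad_v2-201223",
    framework="tf",
)
print(qa(question="Where does the Nile flow?",
         context="The Nile flows through northeastern Africa into the Mediterranean Sea."))
```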
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6080, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0355 | 0.0 | 0 |
| 0.0075 | 0.0 | 1 |
| 0.0012 | 0.0 | 2 |
| 0.0014 | 0.0 | 3 |
| 0.0010 | 0.0 | 4 |
| 0.0031 | 0.0 | 5 |
| 0.0011 | 0.0 | 6 |
| 0.0007 | 0.0 | 7 |
| 0.0213 | 0.0 | 8 |
| 0.0006 | 0.0 | 9 |
| 0.0010 | 0.0 | 10 |
| 0.0000 | 0.0 | 11 |
| 0.0044 | 0.0 | 12 |
| 0.0026 | 0.0 | 13 |
| 0.0003 | 0.0 | 14 |
| 0.0025 | 0.0 | 15 |
| 0.0000 | 0.0 | 16 |
| 0.0042 | 0.0 | 17 |
| 0.0003 | 0.0 | 18 |
| 0.0004 | 0.0 | 19 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Anhptp/opus-mt-fr-en-BDS-G1 | Anhptp | 2023-12-20T21:53:05Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-fr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-fr-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-12-20T21:48:56Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-fr-en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-fr-en-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-fr-en-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9166
- Bleu: 54.476
- Gen Len: 9.0115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.0581 | 1.0 | 625 | 0.9166 | 52.592 | 9.2035 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold3 | hkivancoral | 2023-12-20T21:52:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-20T19:47:03Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_tiny_adamax_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8983333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8519
- Accuracy: 0.8983
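A hedged usage sketch (not part of the original card); the image path is a placeholder and the class labels come from the model config:
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_tiny_adamax_0001_fold3",
)
print(clf("example_image.png"))
```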
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.201 | 1.0 | 750 | 0.2389 | 0.905 |
| 0.2264 | 2.0 | 1500 | 0.3492 | 0.9033 |
| 0.086 | 3.0 | 2250 | 0.4323 | 0.8917 |
| 0.1151 | 4.0 | 3000 | 0.4359 | 0.8867 |
| 0.0552 | 5.0 | 3750 | 0.5210 | 0.8933 |
| 0.0373 | 6.0 | 4500 | 0.7270 | 0.885 |
| 0.0559 | 7.0 | 5250 | 0.6401 | 0.8917 |
| 0.0155 | 8.0 | 6000 | 0.8201 | 0.8883 |
| 0.0012 | 9.0 | 6750 | 0.7935 | 0.9017 |
| 0.0025 | 10.0 | 7500 | 0.8189 | 0.8983 |
| 0.0349 | 11.0 | 8250 | 0.9771 | 0.8933 |
| 0.0002 | 12.0 | 9000 | 0.9998 | 0.8817 |
| 0.0235 | 13.0 | 9750 | 0.8826 | 0.8883 |
| 0.003 | 14.0 | 10500 | 0.8505 | 0.9 |
| 0.0 | 15.0 | 11250 | 0.9776 | 0.895 |
| 0.0 | 16.0 | 12000 | 0.8400 | 0.905 |
| 0.0 | 17.0 | 12750 | 0.8401 | 0.9017 |
| 0.0 | 18.0 | 13500 | 0.9571 | 0.8967 |
| 0.0 | 19.0 | 14250 | 0.8971 | 0.89 |
| 0.0 | 20.0 | 15000 | 0.8936 | 0.8917 |
| 0.0005 | 21.0 | 15750 | 0.8479 | 0.8917 |
| 0.0 | 22.0 | 16500 | 0.8022 | 0.9083 |
| 0.0136 | 23.0 | 17250 | 0.7709 | 0.9083 |
| 0.0 | 24.0 | 18000 | 0.8730 | 0.8983 |
| 0.0 | 25.0 | 18750 | 0.9311 | 0.895 |
| 0.0 | 26.0 | 19500 | 0.8622 | 0.905 |
| 0.0 | 27.0 | 20250 | 0.8794 | 0.9017 |
| 0.0 | 28.0 | 21000 | 0.8617 | 0.9017 |
| 0.0 | 29.0 | 21750 | 0.8575 | 0.905 |
| 0.0 | 30.0 | 22500 | 0.8219 | 0.9033 |
| 0.0 | 31.0 | 23250 | 0.8966 | 0.9017 |
| 0.0 | 32.0 | 24000 | 0.8450 | 0.8967 |
| 0.0 | 33.0 | 24750 | 0.8688 | 0.8933 |
| 0.0 | 34.0 | 25500 | 0.8664 | 0.8933 |
| 0.0 | 35.0 | 26250 | 0.8160 | 0.9017 |
| 0.0 | 36.0 | 27000 | 0.8559 | 0.895 |
| 0.0 | 37.0 | 27750 | 0.8682 | 0.8967 |
| 0.0 | 38.0 | 28500 | 0.8657 | 0.8983 |
| 0.0 | 39.0 | 29250 | 0.8354 | 0.8983 |
| 0.0 | 40.0 | 30000 | 0.8506 | 0.8983 |
| 0.0 | 41.0 | 30750 | 0.8341 | 0.8967 |
| 0.0 | 42.0 | 31500 | 0.8506 | 0.9 |
| 0.0 | 43.0 | 32250 | 0.8450 | 0.8983 |
| 0.0 | 44.0 | 33000 | 0.8494 | 0.9 |
| 0.0 | 45.0 | 33750 | 0.8478 | 0.9 |
| 0.0 | 46.0 | 34500 | 0.8491 | 0.9 |
| 0.0 | 47.0 | 35250 | 0.8495 | 0.9 |
| 0.0 | 48.0 | 36000 | 0.8491 | 0.9 |
| 0.0 | 49.0 | 36750 | 0.8507 | 0.9 |
| 0.0 | 50.0 | 37500 | 0.8519 | 0.8983 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k | LoftQ | 2023-12-20T21:46:39Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:2310.08659",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-20T17:10:40Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# LoftQ Initialization
| [Paper](https://arxiv.org/abs/2310.08659) | [Code](https://github.com/yxli2123/LoftQ) | [PEFT Example](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning) |
LoftQ (LoRA-fine-tuning-aware Quantization) provides a quantized backbone Q and LoRA adapters A and B, given a full-precision pre-trained weight W.
This model, `LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k`, is LoRA fine-tuned from [LLAMA-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on [GSM8K](https://huggingface.co/datasets/gsm8k) dataset.
## Model Info
### LoRA adapters
- rank: 64
- lora_alpha: 16
- target_modules: ["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"]
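For reference, a `peft.LoraConfig` matching these settings would look roughly like the following sketch (fields such as `lora_dropout` and `task_type` are assumptions, as they are not listed above):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    target_modules=["down_proj", "up_proj", "q_proj", "k_proj",
                    "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",  # assumed, since the backbone is a causal LM
)
```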
## Usage
**Inference** Here is example code for inference after the model has been fine-tuned on [GSM8K](https://huggingface.co/datasets/gsm8k).
```python
import torch
from transformers import AutoModelForCausalLM
MODEL_ID = "LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k"
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=torch.bfloat16, # you may change it with different models
token=YOUR_HF_TOKEN,
)
# you can also merge the LoRA adapters to the backbone if you like
model = model.merge_and_unload()
# Do inference with `model` ...
```
See full evaluation on GSM8K on [Github](https://github.com/yxli2123/LoftQ/blob/main/test_gsm8k.py).
## Experiment Results
We have conducted experiments on supervised fine-tuning of [GSM8K](https://huggingface.co/datasets/gsm8k)
and [WikiText-2](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1).
| Model | Bits | Rank | LoRA Initial | GSM8K |
| -------------- | ---- | ---- | -------------------- | ----- |
| **LLAMA-2-7b** | 16 | 64 | Gaussian + 0 | 36.9 |
| LLAMA-2-7b | 4 | 64 | Gaussian + 0 (QLoRA) | 35.1 |
| LLAMA-2-7b | 4 | 64 | LoftQ | 35.0 |
## Citation
```bibtex
@article{li2023loftq,
title={Loftq: Lora-fine-tuning-aware quantization for large language models},
author={Li, Yixiao and Yu, Yifan and Liang, Chen and He, Pengcheng and Karampatziakis, Nikos and Chen, Weizhu and Zhao, Tuo},
journal={arXiv preprint arXiv:2310.08659},
year={2023}
}
``` |
Anhptp/opus-mt-it-en-BDS-G1 | Anhptp | 2023-12-20T21:44:14Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-it-en",
"base_model:finetune:Helsinki-NLP/opus-mt-it-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-12-20T21:38:38Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-it-en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-it-en-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-it-en-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-it-en](https://huggingface.co/Helsinki-NLP/opus-mt-it-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3644
- Bleu: 41.8225
- Gen Len: 9.2435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.5743 | 1.0 | 625 | 1.3644 | 37.7485 | 9.8785 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
cashdasmer/corgy_dog_LoRA | cashdasmer | 2023-12-20T21:40:01Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-12-20T21:24:23Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - cashdasmer/corgy_dog_LoRA
<Gallery />
## Model description
These are cashdasmer/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/cashdasmer/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
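A hedged loading sketch with 🧨 diffusers (the prompt and inference settings are illustrative, not from the original card):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("cashdasmer/corgy_dog_LoRA")
image = pipe("a photo of TOK dog in a meadow", num_inference_steps=30).images[0]
image.save("corgy_dog.png")
```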
|
CarlBrendt/pegasus-samsum | CarlBrendt | 2023-12-20T21:31:04Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-19T15:47:42Z | ---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6034
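A hedged usage sketch (not part of the original card); judging by the model name, inputs are SAMSum-style chat dialogues, and the length limits below are assumptions:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="CarlBrendt/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Tom: Yes, 7pm at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```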
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1537 | 0.11 | 500 | 2.0863 |
| 1.9761 | 0.21 | 1000 | 1.8822 |
| 1.892 | 0.32 | 1500 | 1.7913 |
| 1.8575 | 0.42 | 2000 | 1.7172 |
| 1.7898 | 0.53 | 2500 | 1.6785 |
| 1.7663 | 0.63 | 3000 | 1.6508 |
| 1.6782 | 0.74 | 3500 | 1.6347 |
| 1.7025 | 0.85 | 4000 | 1.6098 |
| 1.6642 | 0.95 | 4500 | 1.6034 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_10x_deit_base_sgd_001_fold1 | hkivancoral | 2023-12-20T21:22:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-20T18:51:35Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_base_sgd_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8964941569282137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2634
- Accuracy: 0.8965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5921 | 1.0 | 751 | 0.6125 | 0.7713 |
| 0.3591 | 2.0 | 1502 | 0.4405 | 0.8314 |
| 0.3047 | 3.0 | 2253 | 0.3783 | 0.8598 |
| 0.2551 | 4.0 | 3004 | 0.3472 | 0.8781 |
| 0.2903 | 5.0 | 3755 | 0.3280 | 0.8748 |
| 0.2606 | 6.0 | 4506 | 0.3152 | 0.8765 |
| 0.2793 | 7.0 | 5257 | 0.3058 | 0.8881 |
| 0.2466 | 8.0 | 6008 | 0.2984 | 0.8915 |
| 0.2613 | 9.0 | 6759 | 0.2938 | 0.8881 |
| 0.2309 | 10.0 | 7510 | 0.2878 | 0.8948 |
| 0.2362 | 11.0 | 8261 | 0.2856 | 0.8915 |
| 0.2432 | 12.0 | 9012 | 0.2826 | 0.8982 |
| 0.2244 | 13.0 | 9763 | 0.2806 | 0.8898 |
| 0.1581 | 14.0 | 10514 | 0.2786 | 0.8915 |
| 0.1869 | 15.0 | 11265 | 0.2757 | 0.8915 |
| 0.1863 | 16.0 | 12016 | 0.2739 | 0.8915 |
| 0.25 | 17.0 | 12767 | 0.2738 | 0.8898 |
| 0.1609 | 18.0 | 13518 | 0.2717 | 0.8932 |
| 0.1993 | 19.0 | 14269 | 0.2719 | 0.8881 |
| 0.1982 | 20.0 | 15020 | 0.2698 | 0.8915 |
| 0.1558 | 21.0 | 15771 | 0.2692 | 0.8982 |
| 0.1601 | 22.0 | 16522 | 0.2690 | 0.8998 |
| 0.1754 | 23.0 | 17273 | 0.2694 | 0.8898 |
| 0.1664 | 24.0 | 18024 | 0.2677 | 0.8932 |
| 0.1901 | 25.0 | 18775 | 0.2669 | 0.8948 |
| 0.2023 | 26.0 | 19526 | 0.2671 | 0.8965 |
| 0.1982 | 27.0 | 20277 | 0.2651 | 0.8965 |
| 0.2093 | 28.0 | 21028 | 0.2655 | 0.8998 |
| 0.1635 | 29.0 | 21779 | 0.2645 | 0.8982 |
| 0.1441 | 30.0 | 22530 | 0.2639 | 0.8998 |
| 0.1066 | 31.0 | 23281 | 0.2648 | 0.9015 |
| 0.227 | 32.0 | 24032 | 0.2644 | 0.9032 |
| 0.1777 | 33.0 | 24783 | 0.2648 | 0.8965 |
| 0.1846 | 34.0 | 25534 | 0.2641 | 0.8982 |
| 0.1694 | 35.0 | 26285 | 0.2643 | 0.8982 |
| 0.1711 | 36.0 | 27036 | 0.2639 | 0.8965 |
| 0.2649 | 37.0 | 27787 | 0.2640 | 0.9032 |
| 0.1541 | 38.0 | 28538 | 0.2641 | 0.8982 |
| 0.107 | 39.0 | 29289 | 0.2640 | 0.8932 |
| 0.2208 | 40.0 | 30040 | 0.2641 | 0.8998 |
| 0.1601 | 41.0 | 30791 | 0.2638 | 0.8948 |
| 0.1237 | 42.0 | 31542 | 0.2635 | 0.8965 |
| 0.1676 | 43.0 | 32293 | 0.2637 | 0.8932 |
| 0.1349 | 44.0 | 33044 | 0.2638 | 0.8965 |
| 0.1329 | 45.0 | 33795 | 0.2638 | 0.8965 |
| 0.177 | 46.0 | 34546 | 0.2636 | 0.8982 |
| 0.1141 | 47.0 | 35297 | 0.2635 | 0.8965 |
| 0.1694 | 48.0 | 36048 | 0.2636 | 0.8965 |
| 0.1549 | 49.0 | 36799 | 0.2634 | 0.8965 |
| 0.1223 | 50.0 | 37550 | 0.2634 | 0.8965 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
cyborcat/LunarLander | cyborcat | 2023-12-20T21:16:02Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-18T18:57:14Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.03 +/- 22.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file actually uploaded in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption; use the .zip checkpoint uploaded in this repo.
checkpoint = load_from_hub("cyborcat/LunarLander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
thibaud/sdxl_dpo_turbo | thibaud | 2023-12-20T21:11:04Z | 434 | 85 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"turbo",
"stable-diffusion",
"stable-diffusion-xl",
"dpo",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2023-12-19T13:57:25Z | ---
license: other
pipeline_tag: text-to-image
tags:
- text-to-image
- turbo
- stable-diffusion
- stable-diffusion-xl
- dpo
widget:
- text: 'rusty robot cartoon'
output:
url: dpo_turbo_robot.jpeg
inference:
parameters:
num_inference_steps: 8
---
## Merge of SDXL Turbo & SDXL DPO
<Gallery />
Read the licenses of both source models before using this merge.
## `*.safetensors` for AUTOMATIC1111, ComfyUI, InvokeAI
[Download *.safetensors file](https://huggingface.co/thibaud/sdxl_dpo_turbo/resolve/main/sdxl_dpo_turbo.safetensors?download=true)
## Use it with 🧨 diffusers
```python
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('thibaud/sdxl_dpo_turbo', torch_dtype=torch.float16).to('cuda')
image = pipeline('A mecha robot in a favela', num_inference_steps=2).images[0]
``` |
LoneStriker/CatPPT-base-4.0bpw-h6-exl2 | LoneStriker | 2023-12-20T20:38:45Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T20:37:03Z | ---
license: apache-2.0
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created by merging the openchat and neuralchat models with the Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then fine-tuning on the no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest-ranked 7B chat model on the leaderboard that's **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
``` |
bearbearyu1223/falcon_7b_LoRA_r16_dialogue_summarization_12_20_2023 | bearbearyu1223 | 2023-12-20T20:31:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:vilsonrodrigues/falcon-7b-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-sharded",
"license:apache-2.0",
"region:us"
] | null | 2023-12-20T16:59:39Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
datasets:
- samsum
base_model: vilsonrodrigues/falcon-7b-sharded
model-index:
- name: falcon_7b_LoRA_r16_dialogue_summarization_12_20_2023
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_7b_LoRA_r16_dialogue_summarization_12_20_2023
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-sharded) on the samsum dataset.
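A minimal sketch for loading the LoRA adapter with 🤗 PEFT on top of the sharded Falcon base; the prompt template, example dialogue, and generation settings below are assumptions rather than details from the training setup:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-sharded", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "bearbearyu1223/falcon_7b_LoRA_r16_dialogue_summarization_12_20_2023"
)
tokenizer = AutoTokenizer.from_pretrained("vilsonrodrigues/falcon-7b-sharded")

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure, thanks!"  # samsum-style example
prompt = f"Summarize the following dialogue.\n{dialogue}\nSummary:"  # template is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```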
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
MaVier19/zero-shot_text_classification | MaVier19 | 2023-12-20T20:31:01Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"base_model:finetune:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T11:30:48Z | ---
license: mit
base_model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: zero-shot_text_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zero-shot_text_classification
This model is a fine-tuned version of [MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6793
- Accuracy: 0.7785
- F1: 0.7798
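A minimal usage sketch with the zero-shot classification pipeline, assuming the fine-tuned head keeps the NLI-style label setup of the base model; the example text and candidate labels are illustrative:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MaVier19/zero-shot_text_classification")
result = classifier(
    "The new phone has a great camera but the battery drains quickly.",  # illustrative input
    candidate_labels=["electronics", "sports", "politics"],               # illustrative labels
)
print(result["labels"][0], result["scores"][0])
```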
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9575 | 1.0 | 1000 | 0.6793 | 0.7785 | 0.7798 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
pszemraj/mGPT-Peter-mwe | pszemraj | 2023-12-20T20:27:29Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"multilingual",
"PyTorch",
"Transformers",
"gpt3",
"Deepspeed",
"Megatron",
"dataset:mc4",
"dataset:Wikipedia",
"base_model:ai-forever/mGPT",
"base_model:finetune:ai-forever/mGPT",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-28T22:54:47Z | ---
license: apache-2.0
tags:
- multilingual
- PyTorch
- Transformers
- gpt3
- gpt2
- Deepspeed
- Megatron
datasets:
- mc4
- Wikipedia
pipeline_tag: text-generation
widget:
- text: 'I know you''re tired, but can we go for another walk this evening?
peter szemraj:
'
example_title: walk
- text: 'What do you call an alligator who''s just had surgery to remove his left
arm?
peter szemraj:
'
example_title: alligator
- text: 'If you could live anywhere, where would it be?
peter szemraj:
'
example_title: dream living place
- text: 'What really makes you angry?
peter szemraj:
'
example_title: pet peeve
- text: 'My friend says that she knows every language, but she doesn''t speak any
of them.. what''s wrong with her?
peter szemraj:
'
example_title: language
- text: 'What would you change about yourself if you could?
peter szemraj:
'
example_title: change
- text: 'My first is in Asia, my second is in Europe, my third is in North America,
and my fourth is in South America. What am I?
peter szemraj:
'
example_title: continent
- text: 'Can you take me for dinner somewhere nice this time?
peter szemraj:
'
example_title: dinner
- text: 'Honey, I have clogged the toilet for the third time this month.. sorry..
peter szemraj:
'
example_title: overflow
- text: 'A man pushes his car to a hotel and tells the owner he''s bankrupt. Why?
peter szemraj:
'
example_title: brain teaser
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.4
no_repeat_ngram_size: 3
do_sample: true
top_p: 0.95
top_k: 30
temperature: 0.65
repetition_penalty: 3.5
base_model: sberbank-ai/mGPT
---
# mGPT: fine-tune on message data MWE
This model is a fine-tuned version of [sberbank-ai/mGPT](https://huggingface.co/sberbank-ai/mGPT) on 80k messages. It was trained for one epoch and will be updated in a separate model repo later.
## Model description
- testing if fine-tuned personality data bleeds over to other languages without being trained in them explicitly
### Usage in Python
Install the transformers library if you don't have it:
```
pip install -U transformers
```
load the model into a pipeline object:
```
from transformers import pipeline
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_chatbot = pipeline('text-generation',
'pszemraj/mGPT-Peter-mwe',
device=0 if device == 'cuda' else -1,
)
```
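A sample generation call, following the prompt convention from the widget examples above (a question followed by `peter szemraj:`); the decoding settings mirror the inference parameters listed in this card:
```
prompt = "If you could live anywhere, where would it be?\npeter szemraj:\n"
response = my_chatbot(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    top_k=30,
    temperature=0.65,
    repetition_penalty=3.5,
    no_repeat_ngram_size=3,
)
print(response[0]['generated_text'])
```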
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pszemraj/gpt-neo-125M-magicprompt-SD | pszemraj | 2023-12-20T20:27:11Z | 21 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"dataset:Gustavosta/Stable-Diffusion-Prompts",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-09T02:56:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Gustavosta/Stable-Diffusion-Prompts
widget:
- text: morning sun over Jakarta
example_title: morning sun
- text: 'WARNING: pip is'
example_title: pip
- text: sentient cheese
example_title: sentient cheese
- text: cheeps are
example_title: cheeps
parameters:
min_length: 32
max_length: 64
no_repeat_ngram_size: 1
do_sample: true
top_k: 50
top_p: 0.95
repetition_penalty: 5.5
base_model: EleutherAI/gpt-neo-125M
model-index:
- name: gpt-neo-125M-magicprompt-SD
results: []
---
# gpt-neo-125M-magicprompt-SD
Generate/augment your prompt, stable diffusion style.
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the Gustavosta/Stable-Diffusion-Prompts dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8875
- perplexity: 6.6028
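A minimal usage sketch; the decoding settings mirror the inference parameters listed in this card:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/gpt-neo-125M-magicprompt-SD")
output = generator(
    "morning sun over Jakarta",
    min_length=32,
    max_length=64,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    repetition_penalty=5.5,
    no_repeat_ngram_size=1,
)
print(output[0]["generated_text"])
```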
## Training and evaluation data
refer to the `Gustavosta/Stable-Diffusion-Prompts` dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2189 | 0.99 | 33 | 3.0051 |
| 2.5466 | 1.99 | 66 | 2.5215 |
| 2.2791 | 2.99 | 99 | 2.2881 |
| 2.107 | 3.99 | 132 | 2.1322 |
| 1.9458 | 4.99 | 165 | 2.0270 |
| 1.8664 | 5.99 | 198 | 1.9580 |
| 1.8083 | 6.99 | 231 | 1.9177 |
| 1.7631 | 7.99 | 264 | 1.8964 |
| 1.7369 | 8.99 | 297 | 1.8885 |
| 1.766 | 9.99 | 330 | 1.8875 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 | pszemraj | 2023-12-20T20:26:35Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"region:us"
] | summarization | 2022-08-30T22:44:50Z | ---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
base_model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 24.4101
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjhmM2NiMDQ1NjI3Zjk4YjkyMTVkMmUwZDU2YWMwZjc4ZmIzMjA1OGZiYzRmNjI3NDk3OWNmOTlkZDMxZmViMyIsInZlcnNpb24iOjF9.wS774e7vxQrf2gCcPhySsET3UaiUsj8E7mQmBS84wz86aT9j1yCqVX-8ozuj896K5wMygbL-TpUbydRIyyHTDw
- type: rouge
value: 5.003
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTYyNTFkNWZhOTgwNDg5ZWU5Y2U5NGI4Y2Y2YTMxNjUzOWI0ZWNlNDE1OGYzMjA1YTBmNDE4ZjcyOTZmODE4NiIsInZlcnNpb24iOjF9.AuqDkCgUgDWl8vMyrjTh59QW741UssGxdBqj3GZKy5e5gKadClUA709qgKbpxPIbMEyk38yvXYGplaJf5CnCCA
- type: rouge
value: 17.2544
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTBmODZmNWRhMzBhY2MzOGRkZWQzNjAzMGViOGMxYWYyZjNlZmM4YzgzMjkxNTk3M2E1ODAwZjY1M2I2MDZkYyIsInZlcnNpb24iOjF9.Md52aHjujvkxaW-ubJNquiHHHgi-OfRav0ZElVvYhIpU_k0iKEaQZRcw9JYjtG5vZJbQeiWbMzcCOJ999DhrAA
- type: rouge
value: 20.9183
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDJjNDc1OTZjY2VmNWRhNmYwZjRjY2JmNTAyNmIwZjRhYjMyMTdlNzY2M2Q4OGQwNTEyYTU0NGVhYWI2ZTk3NSIsInZlcnNpb24iOjF9.nlqol0HEeEjU7509-B9eyohf3CP3EZTibJ1lTvOx3wt8rU5LzEdwFazOTHjpWlcK_rik7jcySdUDe4fGjJtKAQ
- type: loss
value: 3.194674015045166
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzRiYmRiYjFkZDhlNGIwYTg3NDUwZTEzZjc5MjllNmJmODQ1YzBjNDM4MzQwNmMzNmNkMzk5N2M2MzZlOWY4MyIsInZlcnNpb24iOjF9._YJqPY9p_N2n7UxAkTeGenH1sVAkC_Z5HzZ6NbzlQoa8-RXTfbEPLw7fSKmlsGNyZxj7L_Bs4COIWzwAMxZSAA
- type: gen_len
value: 58.9951
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDJhYzU2Zjg4ZmIyOGRmNTU4MDM2NGZiNzc0NDk3YzZkOTQwMWMwNjMzZDQzZTZiZjk4ZDdmMmI2ODRkYjk3OCIsInZlcnNpb24iOjF9.MG1rcM_qpUhQmAYrsBxyNpcLUrPZw6V_uzYzDAo01kQyZEwJClWgMRVgpsSEnY93Mlu1445QLxkJEByUrfD3BQ
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- type: rouge
value: 37.3648
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWU4ZmZmYzllMzQxM2I4YTUxMjkwYjEzNDk1NjRlYjJiZjYyYWNiNzM4ODMxMGJjMzdhYjFhMzhlNTE5YmYyMiIsInZlcnNpb24iOjF9.9NTlO_5zLC8Y3mkwstviPb9WmMqPmXfWfEN0yONA6WYhh1jPy0gECEb5uF0G6wBMhTPDTqGMWOYIAF2vMeNbDA
- type: rouge
value: 12.3316
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTJhZTcxMDc5ODljMDBjYzFmYWIzNTA4M2NiZDUwYTMwNTVjZTUyZTU2M2IwYWE2YjkzMzMzMjg1MDU1OWE1NSIsInZlcnNpb24iOjF9.FRsoRao8qj6A8W7OeIVAoZCEc1HCZEzmKOs0CPkUceF19pk1ngaXt5K6kcPJ-5fYJydtfSuSnuG3aqlOEJeYDQ
- type: rouge
value: 22.075
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2FjNTMxMGYyNjgyNjk2YTQwZjM4MTM4Yjg0MTkyN2RmNDE5YTU5ZDNkZDFhZDM2YWRlNDI4M2JlMWYxNDQ3ZCIsInZlcnNpb24iOjF9.wsLUEYGJyMSJPPclOzb1hcRdE-VrZex2Sd5er_XVbe6bY1cRO5DdIn69sE9hmAcltefu4ikpHu2ihbv7qvj4Aw
- type: rouge
value: 31.1679
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTUyODVkZGIxYzMyZDczNzU5YjVkNTliZmM4ZTdiYWE2ZjJhNGM3ZDgzMWE3ZjA2MDBhZWQ1ZGY1YzNmZDMwNiIsInZlcnNpb24iOjF9.fPgMnnXY5oPdCn1STZ0HwUiil8OlLZ8ZWZZav_chDIQ7Kh1RKeLy0EG2vEhrB6IlyP7uZ3RmdT9VHM1_khrEAw
- type: loss
value: 2.745267391204834
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ2NDVmODI2ZTQyNmVjZjRkZDdlMTdiODBkZTlkNTFkODBjNjViMTZhMDVkYTkwYWIyNDFkZWZhZmJhODEwMyIsInZlcnNpb24iOjF9.9JWTqdGEhztS--N8grHY6q2a8taVu65Lr17ocXgudp4imhqr9Bhau2X2G5SLN7c1oYieKtyKcWdDAmVzHyTbDw
- type: gen_len
value: 157.3126
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWNiODFmMWQ1ZTkzZGNjNDkwM2ZiZjJlZmQ3N2ExNWJhYmUxYWM2ZGNiYzlhYTY5Y2RhOGVlZDhmN2ZmODQwYSIsInZlcnNpb24iOjF9.sRA9iBS4vzFDZtwM4Vs6Kevj3eiTkS5akApUWTZBCt58YSW8mpoKqsWcnQFEjDCCec-FfV_451OLIetcmDZiCA
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- type: rouge
value: 18.2975
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjJhMjQ0Yzc4ZmNkOWI5YjhmOTlmOTA4MTE0NWM4NGRlNjE0NDIwOTY2ZmQyNjA0ZmE5MjM2YjAyZDZiNWFkNiIsInZlcnNpb24iOjF9.2UJ48OcezjnfMC0dGjksZpAiXRGNAOHniHdN-tQmQPo0vXwRYNTyPrVULnVoBZUvSdycTYvjl0jDKNhZmtGfCA
- type: rouge
value: 2.6806
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTlkMmQwZTRmN2JlOTQ0N2I0YjdhOTBmYmU3MzEwNzE2ZjFiOTM4OWMyMWRhNmZjNTBkZWY5OGMwYTZhZDRhYSIsInZlcnNpb24iOjF9.7D-IR1aBxx1goOkbeA3Tzd1Wu0Zfi0yQVSG8HWSboM7J67TBHblFsFCVJE7Z2wZRbBW4WtuDIGAcl1d1_Wu_Aw
- type: rouge
value: 11.9453
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGZjNmY5NmU5ODBmMDQyMjhhNzY3NzBlNDEyMTE3NjY1ZmRkZDZkZWI1YTA0ZTA0NzU1MjMzOTNjZDA3YWM1MCIsInZlcnNpb24iOjF9.SlI42pwrWc_OlcBKOPtrYNzvK_DUk6IJlzrrtjvkZX7k1S7bguekAV-_rWHfn_82k8rJ1FQAReasGHu1dZ0aBw
- type: rouge
value: 14.2121
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2E2MGE0MTQ1YmU0MTJkOWY3ZDhhODIwYWNhNTE3YWJkZTFhYzM1ZjBmNGExODIzYmU2YzE1ODg4ZjdhZWMwMiIsInZlcnNpb24iOjF9.K5FEsZtSph0FqF5zwetkE-X5AKOlj5g_02DPdl-kEe1azKrBBZy9sDiS0WfIGfwHiRdNvKGKi8t3PAGPsfQwCQ
- type: loss
value: 4.836681365966797
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzhlYjA0YzZmYjdmYWQwNDFhNzIzYWNkYzM4OGFlOWJiY2EzYTkxYjk3ZmJmNGQyMGE1ZmYzMDU2MzhhMmVkMiIsInZlcnNpb24iOjF9.uHYwqPBg6K63exBvqt__c82gKi52OhPTRSrcIKHOECCmoXJLJKgFJCuIXGWMJ7UP4HG375e9uqunJB0XwC20DA
- type: gen_len
value: 96.2584
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjNjYzQzNmM5NTY2YzVhNzRkZjMxMzhiYTU1MDBiOGZkYjA4YTg0MmQzYzQ3YTk3N2YwMDA5MWNlM2Y4YTFmZiIsInZlcnNpb24iOjF9.dirG9kG6OdNi-YEMWHv0UMrHTjEt6VS9i6fRbbUeZd1OoP2fl6XcKoDIk6Us-cdiyVnCyyhWsMNsUufMAqLtDA
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- type: rouge
value: 37.3609
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGExYjM5ODRiNThlZTU4ZTdhM2ZlZWRlNTgzNzc3N2ZjODk2ZjdlOGZlMDkzNmU2Yjk1NzQzZjQ5YzkwODllMCIsInZlcnNpb24iOjF9.JQIeaQkG-IlinWoyc6FKJZUgpWfqOsDhludqm5MgVsw68gsjo0nSPp_Y_1q26Y4dulZOLlQLyBAm3mlCA8s5Ag
- type: rouge
value: 8.6943
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWFjNzJkMzViOGM5YWQ0OGQ4ZTg3NTE5MzU1MjZkZjZiZmVkYTk0ZDhkYjAxMjZiZDVkZTYyYjk4MzRjNTQ3YiIsInZlcnNpb24iOjF9.9XJZ2UF6XyZNNrtp-XOEXC6etoDOFLq1xlIoMFEM9Jinisq3kWguXBiqPQWImLKra5WBm7jU_QIX-Fvn8sP-DA
- type: rouge
value: 17.9106
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ1MTdmNThiM2FiOGRmZWRmOTNlYWMwYTU1YjRiNTRlMGEwYjBmMmQ0YjQ4MDBhNzMzZmZkNjk3NjU0YzRhMSIsInZlcnNpb24iOjF9.040nGV6pig0Rzq9vkN83ZVWQzyjcVi13L36v0QF-Nhziol_dPPhuvghTlGWXWHwj6amsKzyh8M7rNfwL2TcsAQ
- type: rouge
value: 33.8022
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDYwOGRmYzg4ODc2NDExNjhhMjI5MDg3MjI0YTQ5ZDRhM2NjN2Q2ZjM5YTIwZDIxNmY3Y2JlMmMxYTE5MDE4ZiIsInZlcnNpb24iOjF9.S1nynUjLz7z4gf-0WFfPs-ZuZubhN9kXyVSrYNzOdT2gTJmByQWasKreZkVSWus-HNAHR8DhzL6UUWxuDMmAAQ
- type: loss
value: 3.4974069595336914
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzkyNmU5NTRhMTkxNjA1M2E1MjdiMTE0MzQyMDc4ODBkNmM1NDg1ZDk4OTNjODk2MThlZGZiYzQxOGE1YzgwMiIsInZlcnNpb24iOjF9.H9Oo0VKvcqAHcVNvjeEPEhQe5HP0v614suyCv75tfFGaPSKTIe3UlBNDdGOtqfUxb2zUNaBQ8MkA66C_Fkq6CA
- type: gen_len
value: 243.3453
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWE1MGQzMDc2NDViOGM5ZmVkZjk0NmY0NzliOTBhMmE3NmY5MmUxMTI3NGE2OTQzM2Y1NjdmN2NlZGFlODFlYiIsInZlcnNpb24iOjF9.635fcTp_czTabJUVR_dwpzdkntb4cxEbODAC9MMTKrLKEf9NHqDBJXQ-nBOieW05iCSYzw_tEi8O-QW-sRxDAw
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 35.2043
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTRlZTdjZDRlZGMxYzA2NmRkYjBiMzZkY2Q1ODUyYjJkM2QwOTRmMzA3ZmU5MDI5ZmM1MmZkZDUwNzc0NjhmNyIsInZlcnNpb24iOjF9.zrskApkmkhbfQLtlgjf_n6i3WmZcmkDH7Sd-JTzOYAU3yk1_Zl4paGdmpXvyQY48M71qWsBYtEKkhnzrkvCGBA
- type: rouge
value: 5.746
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2FlMjU2MzU1MTljZjM0ZmFhMmJlZDAxMTcwZDk3YWE5NjVjYjE0YmEyMTgzY2UyMTVmZDY5ZWM1YmM1ZDA5NSIsInZlcnNpb24iOjF9.5nDuOwa98pon3VW1TazB2Vw1uJgh6pfFMorzgLMJFvhgwYz6_MvLR1dDUeffP4eyw7rGZjBmf039AM7CyKEgCg
- type: rouge
value: 15.6794
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjRmYzk3NWFhZDVlODA4YWRiMDU1ZWFhZmMwMWE4MmNkNmNjZWM3ZjUwYzI3MWIxM2Y4MTlhZDk2ZTg5YjkyYSIsInZlcnNpb24iOjF9.TLflM2CYNgz4DNt-TwjgdkTL8ebKckTNnlPVsGLUUGqNI1CvSswzsPedqmntCfKVsH2YAsKsR4ZUb1HtJFsSAw
- type: rouge
value: 32.1129
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzdhNWE1YjRjNGUzYWYyNzM4MjIyYThiODJhODU2OGVlOTYxOGNhZmQ4Mjk2ZDUwNmU0MGQwNjQ5NTk2MzU4ZiIsInZlcnNpb24iOjF9.5yvTmPktBuyzoVNHn7UHcci3OrZLTm7e9d_lQkJq8UwzUuso1wHoy_gdvnvpn2DvUfdcBi5sXgG4mtFnVnGgBw
- type: loss
value: 2.945225238800049
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTgxNGRiN2RkMzQ5MjI2OGI0MTljZTY5ZDQyMzc5MjhmNzdhZWQ2NmJhYTgzOTRlMGY2YzkzZWE2NzVkYzVmNCIsInZlcnNpb24iOjF9.VkkP4-S6ZoozLj-iuY7tdsrSR0q1JLQXfgPv_0u2sJuv6x9RYMdCpfJHbqYbirV63b9w28USSwaAAMnz-LoJAA
- type: gen_len
value: 307.5493
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmQ1YTgxYmRhYWViYjhhNmYzNjdlYzVhMTNmZTBkY2RiOTRlMTUzNTIzY2RjOTNhMjRmNGRmYjQyNTBmZWRiMiIsInZlcnNpb24iOjF9.7ItU-AQXB4EEj9U9kJceteBQbA5MkZoegeLhCdpZepEaXzqr6Zg3yHLCD9zL_6Svb9uxuin678KOT5Zf-2YWCQ
---
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
> Evaluating some metric results before merging with the "main" wip version
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12) on the `kmfoda/booksum` dataset.
The "base" checkpoint that I update when a training session is productive is [here](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP)
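A minimal long-document summarization sketch (the input text and generation settings below are placeholders, not settings from the evaluation runs):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13",
)
long_text = "..."  # replace with a long document (up to roughly 16384 tokens)
summary = summarizer(long_text, max_length=256, no_repeat_ngram_size=3)
print(summary[0]["summary_text"])
```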
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.1
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17 | pszemraj | 2023-12-20T20:26:03Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"base_model:pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16",
"base_model:finetune:pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"region:us"
] | summarization | 2022-09-26T23:21:47Z | ---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
datasets:
- kmfoda/booksum
inference: false
base_model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
results:
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- type: rouge
value: 36.8427
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODY2YTBkOTUxZjRlOWYwNWI1OWM2ZDMwOTFjMGIzNjc5YzgwMWE2ZDJiNzY5YTZlZTZiZmNlYzNiYjc5NWZiMiIsInZlcnNpb24iOjF9.Jf41H9W-V6vbLXloL3XUtvKG2Uieoeijzean8Ns4AKRgX6OMeAaWpqoOG4Umpb1JsjtXvbSYdqwTlQVm0IAABQ
- type: rouge
value: 8.4234
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY2MzQ1OTM1ZDM1OGU0OTliZGQ0Y2QzZDExZWQ2NzAwNzlhZThjYjc3ZmU1NDZiNTZjZmZiMTA1NjlhNGU1YyIsInZlcnNpb24iOjF9.etPfWSu1mxR5RN-9rq_F5FFi0IXPe81yGZWbpb6yDzZjAoiSTq4RCmaEUlt8JFXkoLJS3pP9JgjSr7Cg4dl2CQ
- type: rouge
value: 17.774
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjRkYTcyMzJiNzY5NWMwYzg2YmFkYTZiMzgxMzJjZDcwOTg3MWZmNTk3OTYzNzFkOGEyNTg4NmU0MjJlZDRlOCIsInZlcnNpb24iOjF9.Cd4LtEHKbEp-n42rDJb7KFqNlCUBKgCTz8sTNgkZVYJqY-rV5JGZtDz5mawNSbJTMn7rNnBNmaU4V99MGQyiBw
- type: rouge
value: 33.2901
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTI4M2RhNzQ5OWM4ZmExZjU1NDU0MDliZTM1YTgxODczMTFjNWFiNmJlNjc1YzEyY2FjZTJiNmFiZmNjNTc2MyIsInZlcnNpb24iOjF9.b52vREVLI3DgfDkku8lzi2KDWLiN9TvNENCjFAKmrifMDxpsjTPGn6qf0csvU6_kgjWkHKFO53VkLr-XFazPBg
- type: loss
value: 3.765686511993408
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTJjNzU1OWEwOWZiYzM2Zjk0ZjZhYmRmYjczMTJlZGFjOTNjZTY4Mjg3NTRiMTAzN2NlYTEyNjU2YWY5M2E5NiIsInZlcnNpb24iOjF9.Q4FFH8cbGLzaZWJUrSKeZl1os7h9S12v8a__oIoeeWL-c9GXVyNdtb5q-eb7r-4G5i9ytBc9NM6n90nuO353Aw
- type: gen_len
value: 213.8849
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTg3NGMwMTY0M2Y0M2JiOTFhNTQ2ODgxNzZjNTAwNjI4YmRhZTllNTU5ZjE5OGE2Y2EwZmI2YTQ3ZTQxNTFkNiIsInZlcnNpb24iOjF9.8yc25qbswrqJa56hlM1vvlD-Re7R1n3Q9_3U4c9OEzC9XIf8ir3zUQOrEZUb9vm5_H9a8QoiEXUcZG6Bq4fTAQ
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 35.4324
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTkyOTU5M2ZiYzc4NzU0Nzg3NzI2MDE3MTVkZGRlYzVlNWFlYzJlYjA3ZGNhYzIyYmM1NzVkZWEzMTRhMGRhNCIsInZlcnNpb24iOjF9.TGS-ZF3MKg0cbgpPm2wz7Y8KarRGvBNyfaaDHFpUesYCR5pcz3a_ojRAGXOTIek-fcS--ZvADjEz8by9GYBOAQ
- type: rouge
value: 5.9586
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTIzZGRkMTc3MmUxOGRhMzI2NjAzNGYxYjY4YTM5N2QxNDJiZTJlMmRiNzY3NTFmZDg2NzAzMWI1ZTA5ZmY4YiIsInZlcnNpb24iOjF9.1fyZffIo-wDg85krXWGgc90SlFLIU_v7URS-14zNEHZSe4kmbcdGmW963WKAEI2v2oRXU7uQ3BsgDS3d30KzDQ
- type: rouge
value: 16.134
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODI4Mzc1YTY3ZDBmYTMxZTJkMmU3YWI4OGE3NGVkODk1NDQ5NThlZTM0NmM5ZGEwODFjZWI5ODQ5YzAxYmMzOCIsInZlcnNpb24iOjF9.KzQLzOXFjJv_tRzKPnkp2AA_8u_aZtI2GQQeavB3L4ksmX-aOnlVu9-fXktfOCiXmmJCbyZfS3uicdyLgqyhBw
- type: rouge
value: 32.4141
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTFhNDQ1Y2YyMThjMzBmY2EzM2MxNzNkYTE1MDYxNzAwOWQxNDdlNjljZWE4ZWRiZmUxODkyOGM3ZDZiN2I3NyIsInZlcnNpb24iOjF9.YYxCtJlNax15r4oY_IikY1MmaU05WCD_JtTMKt5Jdb9Tco2KOPQ9z6Vc6AlGEJNaXVNRm-ROS7CKCDkC55B_Bg
- type: loss
value: 3.050943374633789
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDllNjgzMWZjNzNhZmFlZWQ4OTY3MTVjY2FkOGEzYjFkOGNhNzM3MjhhNTUyNWI5ODhhNTk2MDhlODNhNGMxOCIsInZlcnNpb24iOjF9.5S2y4SbzKWu6BHlnyUH2R9jwO780INnzqQbdKHXizkJKvX8g9qpuYB0Iu41e1aWqmePdY0JbVUqhG3Xfo2otBA
- type: gen_len
value: 279.8735
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmRhZTY2MDMwMTk2ZGFjNTJhN2Q4NmNjYjRjNjgzMzljNzBiYjEwNTgzZGNhYmY5MGNhYTY2MzE0OTlhNGNhZSIsInZlcnNpb24iOjF9.epjIxFmPlfDHJc--eJIo8AgnkjQBDLokICjxVqwyHiE6T0Hlj8D69RhOplEIDwMQyXC5usfkF0zW7ib8JuhyCg
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- type: rouge
value: 38.066
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGJlYjZhMDIzZTAyNzU3MGFhYmIwZTBjNTViZTQ5MjFjODcyNTUzNDg5MWVlNzMxZTQ0NjA5ZjJlYWYxZDk4ZiIsInZlcnNpb24iOjF9.g-Ppx-hZPJBQM160VSDZWLFt0WEv5WbBiOpwQlbFnQ12QSezZiu-NR2wsaZeNLIVWaeEDVTCVpVcHf0-YymYAA
- type: rouge
value: 12.5289
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjlkNjY3OGE4YWIxMjEzZmQyMDg5ZjMxNjhhMzBlMDQ1NDgwZGQ0NWUyYmFhZTY0Mzc4MWQ0NTJjZmE4MmZiOCIsInZlcnNpb24iOjF9.X-rXBFAXTJXVmihkVHdqdpm6QCHbuI4Ligd2SsmvVcpMux6ep2EoBKd4xuTW4WCr6Qjsa7tZ7kJM-1pu9kKSDw
- type: rouge
value: 22.3393
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2E1NGM1YmMyYTc5NTkxNzU3ZWM4OGRmNDBhNDdlZjYwZjBjNWNkNmJkMjkyMDkzNDBlMGIyZDg4ZjBlYTQ3OCIsInZlcnNpb24iOjF9.vZi95CQMrkdETfhQjjgoO2WkpM3Fr4NZCTX7S9q3TnsC9J9KELfcNNXq7rtbWgQndUK74AvBt7G6nG7Qj13nBw
- type: rouge
value: 31.9802
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzU3ZTIzZDhiYjc1ODk2ODg4NTI2MDFhZWFhMDRmMTg2OTg0MzkyMjQ0NDkyODI0ZTE1MmM2MzNiODQ2Y2EzZiIsInZlcnNpb24iOjF9.k48PRPLAGKPT-ILO5HbPciwFG9vdR6_ICvUXmOnJI4mz5dIoBLvR0aTdWCU070jyPveDwXisIvE9scK9jWsUCA
- type: loss
value: 3.0360958576202393
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTgzOGRlNmMwZjYyMzNkNjcwNDM4MTYyZjgzODhjYTdhY2JiNWY4ZjMzNWJhZjc1YjNiYjViZDk2ODMzMmI5ZiIsInZlcnNpb24iOjF9.dH1fJs84sTWXqrmdsCMuc6zexedn0uUWd9gmVV2JKzFzpPbTxzIJSNez7jaGz_sgSK8q-AeclWFrBAgPDnM6Bg
- type: gen_len
value: 161.4671
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjc5NGM4OWU5ZDM2YTZhZmM2OTgwY2ZiODRmYzE2MWRlMTVhZDBjZTQ3ODJkNjU4NzdkZGZlZDk1YjdkYmE0OCIsInZlcnNpb24iOjF9.OSzFnK9k7IT0cv2qXSVzgjTVLkxkqYnUI9OQqPcoEjBK8nqY0OdMQ8BWq6CN6rt6VmVk111B0TJJCTEfseiHBg
---
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16) on the kmfoda/booksum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 3
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
iamhassaan/controlnet_sd_apparel_NVIDIA_A10G | iamhassaan | 2023-12-20T20:16:51Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-12-20T18:11:17Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-iamhassaan/controlnet_sd_apparel_NVIDIA_A10G
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: new knitwear design based on sketch, 100% cotton, red colour, make it photorealistic

|
HaythamB/distilbert-base-uncased-finetuned-squad | HaythamB | 2023-12-20T20:12:13Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-12-20T20:11:54Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1010
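A minimal question-answering sketch (the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="HaythamB/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```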
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6775 | 0.23 | 500 | 1.5964 |
| 1.6083 | 0.45 | 1000 | 1.3869 |
| 1.4495 | 0.68 | 1500 | 1.2649 |
| 1.3425 | 0.9 | 2000 | 1.2112 |
| 1.2108 | 1.13 | 2500 | 1.1996 |
| 1.1106 | 1.36 | 3000 | 1.1464 |
| 1.0877 | 1.58 | 3500 | 1.1472 |
| 1.0493 | 1.81 | 4000 | 1.1132 |
| 1.0196 | 2.03 | 4500 | 1.1237 |
| 0.8958 | 2.26 | 5000 | 1.1212 |
| 0.9042 | 2.48 | 5500 | 1.1219 |
| 0.8821 | 2.71 | 6000 | 1.1013 |
| 0.8943 | 2.94 | 6500 | 1.1010 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
winddude/mamba_financial_headline_sentiment | winddude | 2023-12-20T20:07:20Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"finance",
"text-classification",
"en",
"dataset:winddude/finacial_pharsebank_66agree_split",
"dataset:financial_phrasebank",
"base_model:state-spaces/mamba-2.8b",
"base_model:finetune:state-spaces/mamba-2.8b",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T19:29:29Z | ---
license: apache-2.0
datasets:
- winddude/finacial_pharsebank_66agree_split
- financial_phrasebank
language:
- en
base_model:
- state-spaces/mamba-2.8b
metrics:
- accuracy
- f1
- recall
- precission
model-index:
- name: financial-sentiment-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_66agree
metrics:
- name: Accuracy
type: accuracy
value: 0.82
- name: Percision
type: percision
value: 0.82
- name: recall
type: recall
value: 0.82
- name: F1
type: f1
value: 0.82
pipeline_tag: text-classification
tags:
- finance
---
# Mamba Financial Headline Sentiment Classifier
A sentiment classifier for financial headlines, using Mamba 2.8B as the base model.
Text is classified into one of three labels: positive, neutral, or negative.
## Prompt Format:
```
prompt = f"""Classify the setiment of the following news headlines as either `positive`, `neutral`, or `negative`.\n
Headline: {headline}\n
Classification:"""
```
where `headline` is the headline text you want classified. |
originstory/holisleigh2 | originstory | 2023-12-20T20:03:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-12-20T20:03:57Z | ---
inference: true
tags:
- musicgen
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
widget:
- text: "a funky house with 80s hip hop vibes"
example_title: "Prompt 1"
- text: "a chill song with influences from lofi, chillstep and downtempo"
example_title: "Prompt 2"
- text: "a catchy beat for a podcast intro"
example_title: "Prompt 3"
---
# MusicGen - Small - 300M
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [**small** (this checkpoint)](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [large](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-small")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("small")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| **facebook/musicgen-small** | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
LoneStriker/CatPPT-6.0bpw-h6-exl2 | LoneStriker | 2023-12-20T19:55:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"text-generation",
"en",
"dataset:HuggingFaceH4/no_robots",
"base_model:rishiraj/CatPPT-base",
"base_model:adapter:rishiraj/CatPPT-base",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-12-20T19:52:40Z | ---
license: apache-2.0
library_name: peft
base_model: rishiraj/CatPPT-base
datasets:
- HuggingFaceH4/no_robots
language:
- en
widget:
- text: |
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
output:
text: >-
Aye, me hearties! 'Tis not likely a human can eat a helicopter in any sittin', let alone one! They be too big and made of metal, and not fit for consumption. But if ye be referrin' to helicopter snacks, like nuts and trail mix, then a human might be able to munch a goodly amount in one sittin'. Arr!
tags:
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: CatPPT
results: []
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created by merging the openchat and neuralchat models with the Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then fine-tuning on the no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest-ranked 7B chat model on the leaderboard that's **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
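
For reference, these settings map directly onto 🤗 Transformers `TrainingArguments`. The card does not say which training script was used (e.g. a plain `Trainer` vs. TRL's `SFTTrainer`), so the hedged reconstruction below is illustrative only:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="catppt-finetune",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=128,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    seed=42,
)
```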
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
``` |
hour3/ppo-LunarLander-v2 | hour3 | 2023-12-20T19:53:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-20T19:52:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.51 +/- 13.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename follows the usual huggingface_sb3 convention; adjust to the actual file in this repo.
checkpoint = load_from_hub("hour3/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
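
To sanity-check the reported score locally, a short evaluation sketch (assuming a Gymnasium-based Stable-Baselines3 ≥ 2.0 install with Box2D available) could look like this:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Reuses the `model` loaded in the snippet above; n_eval_episodes is arbitrary.
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```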
|
badokorach/distilbert-base-uncased-finetuned-squad | badokorach | 2023-12-20T19:51:54Z | 44 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-03-14T16:55:16Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1676
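
Pending more details, here is a hedged inference sketch; it assumes the checkpoint in this repo works with the standard extractive question-answering pipeline:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="badokorach/distilbert-base-uncased-finetuned-squad")
# The question/context pair below is purely illustrative.
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```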
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 47 | 3.7022 |
| No log | 2.0 | 94 | 3.2578 |
| No log | 3.0 | 141 | 3.1677 |
| No log | 4.0 | 188 | 3.1411 |
| No log | 5.0 | 235 | 3.2288 |
| No log | 6.0 | 282 | 3.2640 |
| No log | 7.0 | 329 | 3.3689 |
| No log | 8.0 | 376 | 3.4776 |
| No log | 9.0 | 423 | 3.5919 |
| No log | 10.0 | 470 | 3.6319 |
| 2.2819 | 11.0 | 517 | 3.8199 |
| 2.2819 | 12.0 | 564 | 3.9339 |
| 2.2819 | 13.0 | 611 | 3.9348 |
| 2.2819 | 14.0 | 658 | 3.9293 |
| 2.2819 | 15.0 | 705 | 4.0312 |
| 2.2819 | 16.0 | 752 | 4.1651 |
| 2.2819 | 17.0 | 799 | 4.0788 |
| 2.2819 | 18.0 | 846 | 4.1112 |
| 2.2819 | 19.0 | 893 | 4.1513 |
| 2.2819 | 20.0 | 940 | 4.1676 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/CatPPT-5.0bpw-h6-exl2 | LoneStriker | 2023-12-20T19:48:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"text-generation",
"en",
"dataset:HuggingFaceH4/no_robots",
"base_model:rishiraj/CatPPT-base",
"base_model:adapter:rishiraj/CatPPT-base",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-12-20T19:46:26Z | ---
license: apache-2.0
library_name: peft
base_model: rishiraj/CatPPT-base
datasets:
- HuggingFaceH4/no_robots
language:
- en
widget:
- text: |
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
output:
text: >-
Aye, me hearties! 'Tis not likely a human can eat a helicopter in any sittin', let alone one! They be too big and made of metal, and not fit for consumption. But if ye be referrin' to helicopter snacks, like nuts and trail mix, then a human might be able to munch a goodly amount in one sittin'. Arr!
tags:
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: CatPPT
results: []
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created through merging openchat and neuralchat models using Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then finetuned on no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard, that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest-ranked 7B chat model on the leaderboard that is **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
``` |
Bo-Ni/ProteinMechanicsDiffusionDesign_BetaTest | Bo-Ni | 2023-12-20T19:47:33Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-12-20T16:52:00Z | ---
license: apache-2.0
---
# ProteinMechanicsDiffusionDesign
## End-to-end de novo protein generation based on nonlinear mechanical unfolding responses using a protein language diffusion model

### Summary
Through evolution, nature has presented a set of remarkable protein materials, including elastins, silks, keratins and collagens with superior mechanical performances that play crucial roles in mechanobiology. However, going beyond natural designs to discover proteins that meet specified mechanical properties remains challenging. Here we report a generative model that predicts protein designs to meet complex nonlinear mechanical property-design objectives. Our model leverages deep knowledge on protein sequences from a pre-trained protein language model and maps mechanical unfolding responses to create novel proteins. Via full-atom molecular simulations for direct validation, we demonstrate that the designed proteins are novel, and fulfill the targeted mechanical properties, including unfolding energy and mechanical strength, as well as the detailed unfolding force-separation curves. Our model offers rapid pathways to explore the enormous mechanobiological protein sequence space unconstrained by biological synthesis, using mechanical features as target to enable the discovery of protein materials with superior mechanical properties.
### Installation and use
A [colab notebook](./notebook_for_colab/pLDM_inferring_standalong_colab.ipynb) is included in this repository that features the trained model. <br />
<a target="_blank" href="https://colab.research.google.com/github/lamm-mit/ProteinMechanicsDiffusionDesign/blob/main/notebook_for_colab/pLDM_inferring_standalong_colab.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
To install and run, just click the "Open in Colab" badge and run all cells.
Note that the first run may take some time, as it includes downloading and installing the packages and the trained model.
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError0.4_Seed104 | behzadnet | 2023-12-20T19:11:46Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-20T19:11:43Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
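
A hedged sketch of how a checkpoint quantized with these settings could be loaded together with the adapter; the repo ids follow this card's metadata, but the exact loading path used by the authors is not documented here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Rebuild the 4-bit quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError0.4_Seed104"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter
```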
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError0.4_Seed104 | behzadnet | 2023-12-20T19:11:37Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-20T19:11:31Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
Matvey67/distilbert-base-uncased-finetuned-emotion | Matvey67 | 2023-12-20T19:08:35Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T17:30:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9260809028404356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.926
- F1: 0.9261
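
A minimal inference sketch, assuming the fine-tuned checkpoint is published under this repo id:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Matvey67/distilbert-base-uncased-finetuned-emotion")
# The emotion dataset uses six labels (sadness, joy, love, anger, fear, surprise);
# they may appear as LABEL_0..LABEL_5 if id2label was not set during training.
print(classifier("I can't wait to see you this weekend!"))
```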
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7777 | 1.0 | 250 | 0.2958 | 0.9165 | 0.9163 |
| 0.2393 | 2.0 | 500 | 0.2123 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Tamakoma/CowboyHatEmoji | Tamakoma | 2023-12-20T19:07:52Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"convnextv2",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-large-22k-384",
"base_model:finetune:facebook/convnextv2-large-22k-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-24T14:42:12Z | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
base_model: facebook/convnextv2-large-22k-384
model-index:
- name: outputs
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.7777777777777778
name: Accuracy
---
# Cowboy Hat emoji 🤠 (Western)
This model is a fine-tuned version of [facebook/convnextv2-large-22k-384](https://huggingface.co/facebook/convnextv2-large-22k-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4250
- Accuracy: 0.7778
## Model description
Use this model when you want to know whether an artwork is 🤠 or not 🤠.
- Current iteration: v3.5 (Continuous Image Integration)
## Wait, why?
Gelbooru contains a lot of images, but not all of them come from the same region as South East Asia. To filter out such images, we created a classifier that, in theory, learns the differences between western (USA/Europe/etc.) and non-western (Japan/China/SEA) art.
The definition of "not western" is limited to the Asian region (Japan, Korea, China, Taiwan, Thailand and the surrounding region). The author believes that the art is similar enough, with the same "style", which he personally prefers over western art.
## Intended uses & limitations
filter gelbooru data on 🤠 or not 🤠
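
A hedged usage sketch for that filtering step (the image path is illustrative; the returned labels are whatever this checkpoint was trained with):

```python
from transformers import pipeline

clf = pipeline("image-classification", model="Tamakoma/CowboyHatEmoji")
print(clf("artwork_to_check.png"))  # hypothetical local file or image URL
```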
## Training and evaluation data
Selected 358 images of 🤠 and not 🤠.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 802565
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7384 | 1.0 | 152 | 0.4268 | 0.7963 |
| 0.2888 | 2.0 | 304 | 0.4250 | 0.7778 |
| 0.2953 | 3.0 | 456 | 0.4250 | 0.7778 |
| 0.4914 | 4.0 | 608 | 0.4250 | 0.7778 |
| 0.4099 | 5.0 | 760 | 0.4250 | 0.7778 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Lau123/distilbert-base-uncased-detect_ai_generated_text | Lau123 | 2023-12-20T19:06:51Z | 31 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T19:00:36Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-detect_ai_generated_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-detect_ai_generated_text
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0089
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.2078 | 1.0 | 47 | 0.0219 | 1.0 | 1.0 |
| 0.0153 | 2.0 | 94 | 0.0089 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.0
|
HarshalBhg/gte-large-setfit-train-test2 | HarshalBhg | 2023-12-20T18:59:27Z | 5 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:dvilasuero/banking77-topics-setfit",
"arxiv:2209.11055",
"base_model:thenlper/gte-large",
"base_model:finetune:thenlper/gte-large",
"model-index",
"region:us"
] | text-classification | 2023-12-20T18:58:29Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- dvilasuero/banking77-topics-setfit
metrics:
- accuracy
widget:
- text: I requested a refund, and never received it. What can I do?
- text: I have a 1 euro fee on my statement.
- text: I would like an account for my children, how do I go about doing this?
- text: What do I need to do to transfer money into my account?
- text: Which country's currency do you support?
pipeline_tag: text-classification
inference: true
base_model: thenlper/gte-large
model-index:
- name: SetFit with thenlper/gte-large
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: dvilasuero/banking77-topics-setfit
type: dvilasuero/banking77-topics-setfit
split: test
metrics:
- type: accuracy
value: 0.9230769230769231
name: Accuracy
---
# SetFit with thenlper/gte-large
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [dvilasuero/banking77-topics-setfit](https://huggingface.co/datasets/dvilasuero/banking77-topics-setfit) dataset that can be used for Text Classification. This SetFit model uses [thenlper/gte-large](https://huggingface.co/thenlper/gte-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [thenlper/gte-large](https://huggingface.co/thenlper/gte-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 8 classes
- **Training Dataset:** [dvilasuero/banking77-topics-setfit](https://huggingface.co/datasets/dvilasuero/banking77-topics-setfit)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'The money I transferred does not show in the balance.'</li><li>'I was wondering how I could have two charges for the same item happen more than once in a 7 day period. Is there anyway I could get this corrected asap.'</li><li>'What is the source of my available funds?'</li></ul> |
| 0 | <ul><li>'Do you support the EU?'</li><li>"Can you freeze my account? I just saw there are transactions on my account that I don't recognize. How can I fix this?"</li><li>'Please close my account. I am unsatisfied with your service.'</li></ul> |
| 5 | <ul><li>'Are you able to unblock my pin?'</li><li>'I can not find my card pin.'</li><li>'If I need a PIN for my card, where is it located?'</li></ul> |
| 1 | <ul><li>"I can't get money out of the ATM"</li><li>'Where can I use this card at an ATM?'</li><li>'Can I use my card at any ATMs?'</li></ul> |
| 3 | <ul><li>'Can I get cash with this card anywhere?'</li><li>'Can you please show me where I can find the location to link my card?'</li><li>'Am I able to get a card in EU?'</li></ul> |
| 6 | <ul><li>'My friends want to top up my account'</li><li>'Can I be topped up once I hit a certain balance?'</li><li>'Can you tell me why my top up was reverted?'</li></ul> |
| 7 | <ul><li>'How do I send my account money through transfer?'</li><li>'How do I transfer money to my account?'</li><li>'How can I transfer money from an outside bank?'</li></ul> |
| 4 | <ul><li>'Do you work with all fiat currencies?'</li><li>'Can I exchange to EUR?'</li><li>'Is my country supported'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9231 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("HarshalBhg/gte-large-setfit-train-test2")
# Run inference
preds = model("I have a 1 euro fee on my statement.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 10.5833 | 40 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 10 |
| 1 | 19 |
| 2 | 28 |
| 3 | 36 |
| 4 | 13 |
| 5 | 14 |
| 6 | 15 |
| 7 | 21 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
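
A minimal training sketch that mirrors the hyperparameters above, assuming the SetFit 1.0 `Trainer` API and that the dataset exposes `text`/`label` columns with train/test splits:

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

dataset = load_dataset("dvilasuero/banking77-topics-setfit")
model = SetFitModel.from_pretrained("thenlper/gte-large")  # LogisticRegression head by default

args = TrainingArguments(
    batch_size=16,
    num_epochs=1,
    num_iterations=20,
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    seed=42,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    metric="accuracy",
)
trainer.train()
print(trainer.evaluate())
```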
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0026 | 1 | 0.3183 | - |
| 0.1282 | 50 | 0.0614 | - |
| 0.2564 | 100 | 0.0044 | - |
| 0.3846 | 150 | 0.001 | - |
| 0.5128 | 200 | 0.0008 | - |
| 0.6410 | 250 | 0.001 | - |
| 0.7692 | 300 | 0.0006 | - |
| 0.8974 | 350 | 0.0012 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
SkyR/roberta-base-ours-run-2 | SkyR | 2023-12-20T18:57:14Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-12T15:59:44Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: roberta-base
model-index:
- name: run-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1449
- Accuracy: 0.75
- Precision: 0.7115
- Recall: 0.7093
- F1: 0.7103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9838 | 1.0 | 50 | 0.8621 | 0.645 | 0.6536 | 0.6130 | 0.6124 |
| 0.7134 | 2.0 | 100 | 0.8124 | 0.7 | 0.6628 | 0.6421 | 0.6483 |
| 0.4911 | 3.0 | 150 | 0.8571 | 0.7 | 0.6726 | 0.6314 | 0.6361 |
| 0.3104 | 4.0 | 200 | 0.8228 | 0.76 | 0.7298 | 0.7367 | 0.7294 |
| 0.1942 | 5.0 | 250 | 1.1132 | 0.76 | 0.7282 | 0.7031 | 0.7119 |
| 0.1409 | 6.0 | 300 | 1.2218 | 0.685 | 0.6516 | 0.6560 | 0.6524 |
| 0.0976 | 7.0 | 350 | 1.3648 | 0.715 | 0.6984 | 0.7044 | 0.6946 |
| 0.0791 | 8.0 | 400 | 1.5985 | 0.745 | 0.7183 | 0.7113 | 0.7124 |
| 0.0647 | 9.0 | 450 | 1.8884 | 0.725 | 0.6818 | 0.6761 | 0.6785 |
| 0.0275 | 10.0 | 500 | 1.8639 | 0.725 | 0.6979 | 0.7008 | 0.6958 |
| 0.0329 | 11.0 | 550 | 1.8831 | 0.72 | 0.6816 | 0.6869 | 0.6838 |
| 0.0169 | 12.0 | 600 | 2.1426 | 0.73 | 0.6864 | 0.6776 | 0.6794 |
| 0.0072 | 13.0 | 650 | 2.2483 | 0.725 | 0.7187 | 0.7054 | 0.6968 |
| 0.0203 | 14.0 | 700 | 2.2901 | 0.735 | 0.6986 | 0.6885 | 0.6921 |
| 0.0093 | 15.0 | 750 | 2.3134 | 0.725 | 0.6830 | 0.6666 | 0.6723 |
| 0.0089 | 16.0 | 800 | 2.1598 | 0.73 | 0.6919 | 0.6860 | 0.6885 |
| 0.0061 | 17.0 | 850 | 2.0879 | 0.75 | 0.7129 | 0.7132 | 0.7125 |
| 0.0024 | 18.0 | 900 | 2.1285 | 0.745 | 0.7062 | 0.7071 | 0.7049 |
| 0.0043 | 19.0 | 950 | 2.1386 | 0.74 | 0.7001 | 0.7003 | 0.6985 |
| 0.0028 | 20.0 | 1000 | 2.1449 | 0.75 | 0.7115 | 0.7093 | 0.7103 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
SkyR/albert-base-ours-run-2 | SkyR | 2023-12-20T18:56:05Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-12T18:39:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: albert-base-v2
model-index:
- name: albert-base-ours-run-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-ours-run-2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2462
- Accuracy: 0.695
- Precision: 0.6550
- Recall: 0.6529
- F1: 0.6539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.999 | 1.0 | 200 | 0.9155 | 0.615 | 0.5590 | 0.5590 | 0.5524 |
| 0.7736 | 2.0 | 400 | 0.8488 | 0.6 | 0.5639 | 0.5689 | 0.5256 |
| 0.5836 | 3.0 | 600 | 0.8760 | 0.67 | 0.6259 | 0.6158 | 0.6191 |
| 0.4153 | 4.0 | 800 | 1.0050 | 0.675 | 0.6356 | 0.6212 | 0.5974 |
| 0.3188 | 5.0 | 1000 | 1.2033 | 0.655 | 0.6254 | 0.5977 | 0.5991 |
| 0.2335 | 6.0 | 1200 | 1.3407 | 0.625 | 0.5955 | 0.6039 | 0.5937 |
| 0.1752 | 7.0 | 1400 | 1.4246 | 0.72 | 0.6846 | 0.6815 | 0.6820 |
| 0.1056 | 8.0 | 1600 | 1.9654 | 0.69 | 0.6589 | 0.6251 | 0.6311 |
| 0.0696 | 9.0 | 1800 | 1.9376 | 0.715 | 0.6908 | 0.6632 | 0.6627 |
| 0.0352 | 10.0 | 2000 | 1.9970 | 0.72 | 0.6880 | 0.6784 | 0.6817 |
| 0.0227 | 11.0 | 2200 | 2.1449 | 0.705 | 0.6901 | 0.6641 | 0.6679 |
| 0.0199 | 12.0 | 2400 | 2.2213 | 0.72 | 0.6891 | 0.6685 | 0.6749 |
| 0.0077 | 13.0 | 2600 | 2.1500 | 0.69 | 0.6729 | 0.6704 | 0.6647 |
| 0.0067 | 14.0 | 2800 | 2.1780 | 0.69 | 0.6632 | 0.6651 | 0.6621 |
| 0.0034 | 15.0 | 3000 | 2.1759 | 0.71 | 0.6800 | 0.6786 | 0.6788 |
| 0.0013 | 16.0 | 3200 | 2.2139 | 0.71 | 0.6760 | 0.6721 | 0.6735 |
| 0.0005 | 17.0 | 3400 | 2.2282 | 0.7 | 0.6606 | 0.6593 | 0.6599 |
| 0.0003 | 18.0 | 3600 | 2.2257 | 0.7 | 0.6606 | 0.6593 | 0.6599 |
| 0.0003 | 19.0 | 3800 | 2.2492 | 0.695 | 0.6550 | 0.6529 | 0.6539 |
| 0.0002 | 20.0 | 4000 | 2.2462 | 0.695 | 0.6550 | 0.6529 | 0.6539 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/albert-base-ours-run-1 | SkyR | 2023-12-20T18:55:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-12T18:16:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: albert-base-v2
model-index:
- name: albert-base-ours-run-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-ours-run-1
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3970
- Accuracy: 0.735
- Precision: 0.7033
- Recall: 0.6790
- F1: 0.6873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9719 | 1.0 | 200 | 0.8460 | 0.635 | 0.6534 | 0.5920 | 0.5547 |
| 0.7793 | 2.0 | 400 | 0.7762 | 0.675 | 0.6965 | 0.6323 | 0.5936 |
| 0.5734 | 3.0 | 600 | 0.8149 | 0.67 | 0.6200 | 0.6192 | 0.6196 |
| 0.3877 | 4.0 | 800 | 0.9555 | 0.7 | 0.6724 | 0.6482 | 0.6549 |
| 0.2426 | 5.0 | 1000 | 1.1248 | 0.695 | 0.6529 | 0.6437 | 0.6452 |
| 0.183 | 6.0 | 1200 | 1.3497 | 0.705 | 0.6717 | 0.6489 | 0.6563 |
| 0.1011 | 7.0 | 1400 | 1.6369 | 0.7 | 0.6620 | 0.6532 | 0.6560 |
| 0.0602 | 8.0 | 1600 | 1.8171 | 0.7 | 0.6763 | 0.6615 | 0.6654 |
| 0.0335 | 9.0 | 1800 | 1.9601 | 0.695 | 0.6640 | 0.6490 | 0.6545 |
| 0.0158 | 10.0 | 2000 | 2.0206 | 0.71 | 0.6802 | 0.6751 | 0.6768 |
| 0.0148 | 11.0 | 2200 | 2.0881 | 0.675 | 0.6252 | 0.6242 | 0.6232 |
| 0.0057 | 12.0 | 2400 | 2.2708 | 0.735 | 0.7146 | 0.6790 | 0.6904 |
| 0.0079 | 13.0 | 2600 | 2.2348 | 0.72 | 0.6917 | 0.6659 | 0.6746 |
| 0.0018 | 14.0 | 2800 | 2.2978 | 0.725 | 0.6968 | 0.6662 | 0.6761 |
| 0.0025 | 15.0 | 3000 | 2.3180 | 0.735 | 0.7067 | 0.6790 | 0.6883 |
| 0.0028 | 16.0 | 3200 | 2.3910 | 0.74 | 0.7153 | 0.6854 | 0.6953 |
| 0.0002 | 17.0 | 3400 | 2.3830 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
| 0.0002 | 18.0 | 3600 | 2.3899 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
| 0.0001 | 19.0 | 3800 | 2.3922 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
| 0.0001 | 20.0 | 4000 | 2.3970 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/albert-base-ours-run-4 | SkyR | 2023-12-20T18:55:46Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-12T19:25:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: albert-base-v2
model-index:
- name: albert-base-ours-run-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-ours-run-4
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9565
- Accuracy: 0.72
- Precision: 0.6790
- Recall: 0.6770
- F1: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0253 | 1.0 | 200 | 0.8974 | 0.605 | 0.7186 | 0.5341 | 0.4555 |
| 0.8121 | 2.0 | 400 | 0.8260 | 0.675 | 0.6792 | 0.6308 | 0.6112 |
| 0.6153 | 3.0 | 600 | 0.8504 | 0.66 | 0.6180 | 0.6026 | 0.6073 |
| 0.441 | 4.0 | 800 | 0.8917 | 0.685 | 0.6463 | 0.6385 | 0.6403 |
| 0.3273 | 5.0 | 1000 | 0.9384 | 0.69 | 0.6534 | 0.6602 | 0.6561 |
| 0.2138 | 6.0 | 1200 | 1.3501 | 0.705 | 0.6573 | 0.6374 | 0.6388 |
| 0.1435 | 7.0 | 1400 | 1.4614 | 0.71 | 0.6693 | 0.6553 | 0.6601 |
| 0.1202 | 8.0 | 1600 | 1.5825 | 0.7 | 0.6648 | 0.6592 | 0.6530 |
| 0.0587 | 9.0 | 1800 | 1.7755 | 0.72 | 0.6839 | 0.6849 | 0.6840 |
| 0.0237 | 10.0 | 2000 | 1.7240 | 0.735 | 0.6960 | 0.6924 | 0.6940 |
| 0.018 | 11.0 | 2200 | 1.7230 | 0.745 | 0.7105 | 0.7003 | 0.7026 |
| 0.0096 | 12.0 | 2400 | 1.7812 | 0.75 | 0.7225 | 0.7142 | 0.7158 |
| 0.006 | 13.0 | 2600 | 1.8223 | 0.75 | 0.7265 | 0.7082 | 0.7147 |
| 0.0033 | 14.0 | 2800 | 1.9872 | 0.76 | 0.7434 | 0.7107 | 0.7188 |
| 0.003 | 15.0 | 3000 | 1.8818 | 0.72 | 0.6778 | 0.6766 | 0.6765 |
| 0.0027 | 16.0 | 3200 | 1.9816 | 0.75 | 0.7125 | 0.6990 | 0.7043 |
| 0.002 | 17.0 | 3400 | 1.9268 | 0.725 | 0.6832 | 0.6834 | 0.6825 |
| 0.0023 | 18.0 | 3600 | 1.9456 | 0.73 | 0.6913 | 0.6898 | 0.6898 |
| 0.0025 | 19.0 | 3800 | 1.9543 | 0.72 | 0.6790 | 0.6770 | 0.6766 |
| 0.0016 | 20.0 | 4000 | 1.9565 | 0.72 | 0.6790 | 0.6770 | 0.6766 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/hing-mbert-ours-run-2 | SkyR | 2023-12-20T18:55:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:l3cube-pune/hing-mbert",
"base_model:finetune:l3cube-pune/hing-mbert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-13T19:00:29Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: l3cube-pune/hing-mbert
model-index:
- name: hing-mbert-ours-run-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-mbert-ours-run-2
This model is a fine-tuned version of [l3cube-pune/hing-mbert](https://huggingface.co/l3cube-pune/hing-mbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3919
- Accuracy: 0.62
- Precision: 0.5759
- Recall: 0.5631
- F1: 0.5669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0284 | 1.0 | 100 | 1.2914 | 0.595 | 0.5712 | 0.4800 | 0.4642 |
| 0.8127 | 2.0 | 200 | 0.8552 | 0.59 | 0.5744 | 0.5675 | 0.4891 |
| 0.5499 | 3.0 | 300 | 1.1212 | 0.645 | 0.6544 | 0.5600 | 0.5475 |
| 0.3433 | 4.0 | 400 | 1.2017 | 0.605 | 0.5872 | 0.5866 | 0.5791 |
| 0.2218 | 5.0 | 500 | 1.8329 | 0.655 | 0.6458 | 0.6064 | 0.6055 |
| 0.1763 | 6.0 | 600 | 2.4194 | 0.655 | 0.6140 | 0.5802 | 0.5871 |
| 0.1022 | 7.0 | 700 | 2.3894 | 0.66 | 0.6171 | 0.6045 | 0.6048 |
| 0.0631 | 8.0 | 800 | 2.8259 | 0.605 | 0.5704 | 0.5255 | 0.5259 |
| 0.0254 | 9.0 | 900 | 2.9135 | 0.65 | 0.6013 | 0.5734 | 0.5784 |
| 0.0316 | 10.0 | 1000 | 3.0548 | 0.62 | 0.5862 | 0.5650 | 0.5670 |
| 0.026 | 11.0 | 1100 | 3.1020 | 0.62 | 0.5722 | 0.5593 | 0.5619 |
| 0.0152 | 12.0 | 1200 | 3.0692 | 0.62 | 0.5685 | 0.5597 | 0.5621 |
| 0.0156 | 13.0 | 1300 | 3.1068 | 0.615 | 0.5771 | 0.5589 | 0.5624 |
| 0.0237 | 14.0 | 1400 | 3.3487 | 0.62 | 0.5924 | 0.5589 | 0.5642 |
| 0.0094 | 15.0 | 1500 | 3.2007 | 0.615 | 0.5665 | 0.5639 | 0.5650 |
| 0.0054 | 16.0 | 1600 | 3.2838 | 0.62 | 0.5807 | 0.5657 | 0.5690 |
| 0.005 | 17.0 | 1700 | 3.2258 | 0.615 | 0.5846 | 0.5723 | 0.5747 |
| 0.005 | 18.0 | 1800 | 3.3572 | 0.63 | 0.5827 | 0.5698 | 0.5736 |
| 0.0022 | 19.0 | 1900 | 3.3642 | 0.62 | 0.5759 | 0.5631 | 0.5669 |
| 0.0019 | 20.0 | 2000 | 3.3919 | 0.62 | 0.5759 | 0.5631 | 0.5669 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|