| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF | mradermacher | "2024-12-16T19:16:47Z" | 5 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B",
"base_model:quantized:zelk12/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-16T16:29:04Z" | ---
base_model: zelk12/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zelk12/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
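For a quick local test, the sketch below downloads and runs the Q4_K_M quant directly from the Hub with llama.cpp (this assumes a recent llama.cpp build with `--hf-repo` support; the prompt is illustrative):
```bash
llama-cli --hf-repo mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF \
  --hf-file MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q4_K_M.gguf \
  -p "Write a haiku about merging models."
```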
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B-GGUF/resolve/main/MT1-Gen4-BB-gemma-2-MTg2MTMM2MU-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
visdata/tum1 | visdata | "2025-02-12T17:18:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-12T17:12:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
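In the absence of author-provided code, here is a minimal sketch using the standard 🤗 Transformers API (the repo id comes from this card's metadata, which tags a llama-architecture text-generation checkpoint; the prompt and generation length are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the checkpoint from the Hub; tags indicate a llama-architecture causal LM.
tokenizer = AutoTokenizer.from_pretrained("visdata/tum1")
model = AutoModelForCausalLM.from_pretrained("visdata/tum1", device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)  # illustrative length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```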
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
minhcrafters/Meta-Llama-3.1-8B-Instruct-pychael-LoRA | minhcrafters | "2025-02-05T12:32:41Z" | 79 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-05T12:32:30Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhcrafters
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
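If this repository holds a PEFT LoRA adapter, as the name suggests, a minimal loading sketch with 🤗 PEFT would look like the following; treating the repo as an adapter on the 4-bit base named above is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit base model named in this card, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "minhcrafters/Meta-Llama-3.1-8B-Instruct-pychael-LoRA")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit")
```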
|
Eagalon/GPT-NeoX-20B-Erebus-Q4_K_S-GGUF | Eagalon | "2024-09-25T18:14:17Z" | 84 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:KoboldAI/GPT-NeoX-20B-Erebus",
"base_model:quantized:KoboldAI/GPT-NeoX-20B-Erebus",
"license:apache-2.0",
"region:us"
] | null | "2024-09-25T18:13:28Z" | ---
base_model: KoboldAI/GPT-NeoX-20B-Erebus
language: en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
inference: false
---
# Eagalon/GPT-NeoX-20B-Erebus-Q4_K_S-GGUF
This model was converted to GGUF format from [`KoboldAI/GPT-NeoX-20B-Erebus`](https://huggingface.co/KoboldAI/GPT-NeoX-20B-Erebus) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/KoboldAI/GPT-NeoX-20B-Erebus) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Eagalon/GPT-NeoX-20B-Erebus-Q4_K_S-GGUF --hf-file gpt-neox-20b-erebus-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Eagalon/GPT-NeoX-20B-Erebus-Q4_K_S-GGUF --hf-file gpt-neox-20b-erebus-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Eagalon/GPT-NeoX-20B-Erebus-Q4_K_S-GGUF --hf-file gpt-neox-20b-erebus-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Eagalon/GPT-NeoX-20B-Erebus-Q4_K_S-GGUF --hf-file gpt-neox-20b-erebus-q4_k_s.gguf -c 2048
```
|
abandhu/llama-3-8b-Instruct-bnb-4bit-abandhu-V9 | abandhu | "2025-01-14T13:14:53Z" | 33 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-14T13:11:56Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** abandhu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
digitaljungle/a2c-AntBulletEnv-v0 | digitaljungle | "2023-07-29T14:44:12Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-29T14:43:16Z" | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1449.28 +/- 69.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the standard SB3 Hub naming convention and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumes the standard SB3 export name for this repo.
checkpoint = load_from_hub("digitaljungle/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
aleegis10/c21f72ca-33c4-4fcb-bd7d-55f19ba55d96 | aleegis10 | "2025-01-19T19:51:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | "2025-01-19T19:28:23Z" | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c21f72ca-33c4-4fcb-bd7d-55f19ba55d96
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 60b8bbb767608822_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/60b8bbb767608822_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis10/c21f72ca-33c4-4fcb-bd7d-55f19ba55d96
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/60b8bbb767608822_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55a5daee-2eae-4f24-a147-bbbbf055ce62
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 55a5daee-2eae-4f24-a147-bbbbf055ce62
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c21f72ca-33c4-4fcb-bd7d-55f19ba55d96
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.95) and epsilon=1e-05, set via optimizer_args (overriding the defaults betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 20.2677 | 0.0007 | 1 | 1.9246 |
| 14.0511 | 0.0327 | 50 | 1.8728 |
| 10.4552 | 0.0653 | 100 | 1.7498 |
| 10.4083 | 0.0980 | 150 | 1.6584 |
| 13.0913 | 0.1307 | 200 | 1.6307 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bigmorning/whisper_4_with_init_sun_syl_wd_0_lr_7en5_0005 | bigmorning | "2023-09-13T09:30:29Z" | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-09-13T09:30:21Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_syl_wd_0_lr_7en5_0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_syl_wd_0_lr_7en5_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0826
- Train Accuracy: 0.0133
- Train Wermet: 0.7361
- Train Wermet Syl: 0.6973
- Validation Loss: 2.7406
- Validation Accuracy: 0.0139
- Validation Wermet: 0.7479
- Validation Wermet Syl: 0.7097
- Epoch: 4
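The checkpoint is a TensorFlow Whisper model; a minimal loading sketch with 🤗 Transformers follows (reusing the base model's processor is an assumption):
```python
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

# Processor from the base model named in this card (an assumption).
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained(
    "bigmorning/whisper_4_with_init_sun_syl_wd_0_lr_7en5_0005"
)
```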
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 7e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 4.9248 | 0.0113 | 1.1777 | 1.1068 | 3.9252 | 0.0115 | 0.9215 | 0.8812 | 0 |
| 4.6658 | 0.0117 | 0.8497 | 0.8066 | 3.9186 | 0.0113 | 0.9766 | 0.9699 | 1 |
| 4.6144 | 0.0118 | 0.8224 | 0.7735 | 3.8791 | 0.0115 | 0.9336 | 0.9042 | 2 |
| 4.5388 | 0.0120 | 0.7917 | 0.7466 | 3.6581 | 0.0120 | 0.8508 | 0.8073 | 3 |
| 4.0826 | 0.0133 | 0.7361 | 0.6973 | 2.7406 | 0.0139 | 0.7479 | 0.7097 | 4 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
mvkolos/sd-class-butterflies-32 | mvkolos | "2024-05-21T10:34:40Z" | 46 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2024-05-21T10:34:27Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('mvkolos/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
youssefJedidi/phi_4_mini-med-Q4_K_M-GGUF | youssefJedidi | "2025-03-14T06:29:36Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"phi3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:youssefJedidi/phi_4_mini-med",
"base_model:quantized:youssefJedidi/phi_4_mini-med",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-14T06:28:01Z" | ---
base_model: youssefJedidi/phi_4_mini-med
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- phi3
- llama-cpp
- gguf-my-repo
---
# youssefJedidi/phi_4_mini-med-Q4_K_M-GGUF
This model was converted to GGUF format from [`youssefJedidi/phi_4_mini-med`](https://huggingface.co/youssefJedidi/phi_4_mini-med) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/youssefJedidi/phi_4_mini-med) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo youssefJedidi/phi_4_mini-med-Q4_K_M-GGUF --hf-file phi_4_mini-med-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo youssefJedidi/phi_4_mini-med-Q4_K_M-GGUF --hf-file phi_4_mini-med-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo youssefJedidi/phi_4_mini-med-Q4_K_M-GGUF --hf-file phi_4_mini-med-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo youssefJedidi/phi_4_mini-med-Q4_K_M-GGUF --hf-file phi_4_mini-med-q4_k_m.gguf -c 2048
```
|
datek/Qwen-Qwen1.5-1.8B-1717770688 | datek | "2024-06-07T14:33:40Z" | 150 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-07T14:32:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
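In the absence of author-provided code, here is a minimal sketch with the 🤗 `pipeline` API (the repo id comes from this card's metadata, which tags a qwen2 text-generation checkpoint; the prompt and length are illustrative assumptions):
```python
from transformers import pipeline

# Tags indicate a qwen2 text-generation checkpoint; settings are illustrative.
generator = pipeline("text-generation", model="datek/Qwen-Qwen1.5-1.8B-1717770688")
print(generator("The capital of France is", max_new_tokens=20)[0]["generated_text"])
```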
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CausalLM/EarlyFailures7B | CausalLM | "2023-10-23T06:44:42Z" | 68 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"en",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-05T12:36:04Z" | ---
license: gpl-3.0
language:
- en
- zh
tags:
- llama
- llama2
- qwen
---
This is a sample where improper initialization was used, resulting in limited performance. |
DriveMyScream/Fake_News_Classification_model | DriveMyScream | "2023-09-10T16:31:45Z" | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | "2023-09-10T16:30:23Z" | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
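These settings correspond to a stock Keras Adam optimizer; a minimal sketch reconstructing it from the table above (for reference only):
```python
from tensorflow import keras

# Values taken from the hyperparameter table above.
optimizer = keras.optimizers.Adam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False
)
```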
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
rcarrata/rcarrata-finetuning-sentiment-model-3000-samples | rcarrata | "2023-08-12T11:24:01Z" | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-12T11:17:20Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: rcarrata-finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rcarrata-finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3195
- Accuracy: 0.8733
- F1: 0.8742
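For inference, a minimal sketch with the 🤗 `pipeline` API (the repo id is taken from this card; the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rcarrata/rcarrata-finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a delight from start to finish."))
```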
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Miwa-Keita/zenz-v2.5-xsmall | Miwa-Keita | "2025-01-13T16:13:06Z" | 21 | 2 | null | [
"safetensors",
"gpt2",
"japanese input",
"kana kanji conversion",
"text2text-generation",
"ja",
"dataset:Miwa-Keita/zenz-v2.5-dataset",
"license:cc-by-sa-4.0",
"region:us"
] | text2text-generation | "2025-01-13T07:44:03Z" | ---
license: cc-by-sa-4.0
language:
- ja
tags:
- japanese input
- kana kanji conversion
datasets:
- Miwa-Keita/zenz-v2.5-dataset
pipeline_tag: text2text-generation
---
# zenz-v2.5-xsmall
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/663b87e5a14bfb0a2d4914df/gWnbTavSqhhWYJrP6SdQ1.png" alt="zenz-v2 model spec" width="400"/>
</div>
<!-- Provide a quick summary of what the model is/does. -->
zenz-v2.5 is a conditional language model with a GPT-2 architecture, specialized for the kana-kanji conversion task. It is intended for use with the neural kana-kanji conversion system "Zenzai".
* Character-level plus byte-level BPE tokenizer
* High performance on the kana-kanji conversion task
* Strong performance on context-aware conversion
zenz-v2.5 is released in three model sizes:
* **[zenz-v2.5-medium](https://huggingface.co/Miwa-Keita/zenz-v2.5-medium)**: a large 310M-parameter model
* **[zenz-v2.5-small](https://huggingface.co/Miwa-Keita/zenz-v2.5-small)**: a mid-sized 91M-parameter model
* **[zenz-v2.5-xsmall](https://huggingface.co/Miwa-Keita/zenz-v2.5-xsmall)**: a small 26M-parameter model
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model uses the tokenizer from [ku-nlp/gpt2-small-japanese-char](https://huggingface.co/ku-nlp/gpt2-small-japanese-char), which is provided under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja).
This model is provided under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja).
- **Developed by:** Keita Miwa ([𝕏](https://twitter.com/miwa_ensan))
- **Model type:** GPT-2
- **Language(s) (NLP):** Japanese
- **License:** CC-BY-SA 4.0
### Model Sources
<!-- Provide the basic links for the model. -->
This model is built to be used together with Zenzai (AzooKeyKanaKanjiConverter).
- **Repository:** https://github.com/ensan-hcl/AzooKeyKanaKanjiConverter
### Data Sources
This model was built using the [zenz-v2.5-dataset](https://huggingface.co/datasets/Miwa-Keita/zenz-v2.5-dataset).
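The weights are standard GPT-2 safetensors, so they can be loaded with 🤗 Transformers for experimentation (a minimal sketch; the Zenzai-specific conversion prompt format is not covered here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Miwa-Keita/zenz-v2.5-xsmall")
model = AutoModelForCausalLM.from_pretrained("Miwa-Keita/zenz-v2.5-xsmall")
```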
## Acknowledgements
In building this model, we received support with computational resources from SAKURA internet Inc., for which we are grateful.
This model was also built using the following libraries, tools, and language resources:
* MeCab (https://taku910.github.io/mecab/)
* ipadic-NEologd (https://github.com/neologd/mecab-ipadic-neologd)
* torch (https://pypi.org/project/torch/)
* transformers (https://pypi.org/project/transformers/)
* datasets (https://pypi.org/project/datasets/)
* jaconv (https://pypi.org/project/jaconv/)
* llama.cpp (https://github.com/ggerganov/llama.cpp)
* llm.c (https://github.com/karpathy/llm.c) |
RichardErkhov/wolfram_-_miqu-1-120b-gguf | RichardErkhov | "2024-10-26T23:12:50Z" | 5 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-25T17:48:24Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
miqu-1-120b - GGUF
- Model creator: https://huggingface.co/wolfram/
- Original model: https://huggingface.co/wolfram/miqu-1-120b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [miqu-1-120b.Q2_K.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q2_K | 41.15GB |
| [miqu-1-120b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | IQ3_XS | 45.78GB |
| [miqu-1-120b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | IQ3_S | 48.4GB |
| [miqu-1-120b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q3_K_S | 48.25GB |
| [miqu-1-120b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | IQ3_M | 50.05GB |
| [miqu-1-120b.Q3_K.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q3_K | 53.85GB |
| [miqu-1-120b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q3_K_M | 53.85GB |
| [miqu-1-120b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q3_K_L | 58.68GB |
| [miqu-1-120b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | IQ4_XS | 60.36GB |
| [miqu-1-120b.Q4_0.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q4_0 | 63.1GB |
| [miqu-1-120b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | IQ4_NL | 63.7GB |
| [miqu-1-120b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q4_K_S | 63.57GB |
| [miqu-1-120b.Q4_K.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q4_K | 67.19GB |
| [miqu-1-120b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q4_K_M | 67.19GB |
| [miqu-1-120b.Q4_1.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q4_1 | 70.09GB |
| [miqu-1-120b.Q5_0.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q5_0 | 77.08GB |
| [miqu-1-120b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q5_K_S | 77.08GB |
| [miqu-1-120b.Q5_K.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q5_K | 79.18GB |
| [miqu-1-120b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q5_K_M | 79.18GB |
| [miqu-1-120b.Q5_1.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q5_1 | 84.06GB |
| [miqu-1-120b.Q6_K.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q6_K | 91.93GB |
| [miqu-1-120b.Q8_0.gguf](https://huggingface.co/RichardErkhov/wolfram_-_miqu-1-120b-gguf/tree/main/) | Q8_0 | 119.06GB |
Original model description:
---
base_model:
- 152334H/miqu-1-70b-sf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# miqu-1-120b

- EXL2: [2.4bpw](https://huggingface.co/LoneStriker/wolfram_miqu-1-120b-2.4bpw-h6-exl2) | [2.65bpw](https://huggingface.co/LoneStriker/wolfram_miqu-1-120b-2.65bpw-h6-exl2) | [3.0bpw](https://huggingface.co/LoneStriker/wolfram_miqu-1-120b-3.0bpw-h6-exl2) | [4.0bpw](https://huggingface.co/LoneStriker/wolfram_miqu-1-120b-4.0bpw-h6-exl2) | [5.0bpw](https://huggingface.co/LoneStriker/wolfram_miqu-1-120b-5.0bpw-h6-exl2)
- GGUF: [Q2_K-Q5_K_M](https://huggingface.co/LoneStriker/wolfram_miqu-1-120b-GGUF/) | [IQ3_XXS](https://huggingface.co/wolfram/miqu-1-120b-GGUF)
This is a 120b frankenmerge of [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with itself using [mergekit](https://github.com/cg123/mergekit).
Inspired by [Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2), [MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b), and [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit), the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub.
Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker)!
Also available: [miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0) – Miqu's younger, fresher sister; a new and improved Goliath-like merge of Miqu and lzlv.
## Review
u/SomeOddCodeGuy wrote on r/LocalLLaMA:
> I love this model. It's slow as Christmas but it's SO GOOD. You did great on this.
>
> But this model is close to getting me to shut down my ChatGPT 4 subscription lol. Between it, Deepseek and a couple others, I'm not sure I'll be using ChatGPT much anymore.
>
> Im using the Q8 at 16k, and I can't express how true it remains to its context. I might try to do some testing this weekend, but its great so far.
>
> I've been using your miqu-1 the past two days and its phenomenal. It understands everything I'm saying in ways only ChatGPT did. I've been purposefully getting more and more vague/relaxed in my speaking, and talking about the most inane stuff, and it just follows right along like a person would.
>
> Miqu-1 does ignore instructions a little. I tried to make a more sarcastic/insulting AI assistant to chat with, and specifically told it (multiple times after a few tries) to not apologize to me after, and it wouldn't stop. So if it made a jab like "Wow, great work spelling that word. Quite the whiz kid huh?", making fun of me for misspelling something, it would refuse to not follow up with "Seriously, though, sometimes misspellings happen" lol. But that's the only issue I've had with it.
(Note: All I did was merge this, though, so the credit mostly belongs to [Mistral AI](https://mistral.ai/) (giving proper attribution!) and the creators of [mergekit](https://github.com/arcee-ai/mergekit) as well as [Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2) and [MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b) who inspired it.)
## Model Details
- Max Context: 32764 tokens (kept the weird number from the original/base model)
- Layers: 140
### Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
See also: [🐺🐦⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 20]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [10, 30]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [20, 40]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [30, 50]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [40, 60]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [50, 70]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [60, 80]
model: 152334H/miqu-1-70b-sf
```
## Credits & Special Thanks
- original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
- ⭐⭐⭐ **[Use their newer, better, official models here!](https://console.mistral.ai/)** ⭐⭐⭐
- leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
- f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [nsfwthrowitaway69/Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2)
### Support
- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
## Disclaimer
*This model contains leaked weights and due to its content it should not be used by anyone.* 😜
But seriously:
### License
**What I *know*:** [Weights produced by a machine are not copyrightable](https://www.reddit.com/r/LocalLLaMA/comments/1amc080/psa_if_you_use_miqu_or_a_derivative_please_keep/kpmamte/) so there is no copyright owner who could grant permission or a license to use, or restrict usage, once you have acquired the files.
### Ethics
**What I *believe*:** All generative AI, including LLMs, only exists because it is trained mostly on human data (both public domain and copyright-protected, most likely acquired without express consent) and possibly synthetic data (which is ultimately derived from human data, too). It is only fair if something that is based on everyone's knowledge and data is also freely accessible to the public, the actual creators of the underlying content. Fair use, fair AI!
|
abhayesian/LLama2_HarmBench_LAT_3 | abhayesian | "2024-06-06T04:18:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-06T04:18:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF | MaziyarPanahi | "2024-01-26T06:36:20Z" | 92 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp",
"base_model:quantized:MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp",
"conversational"
] | text-generation | "2024-01-26T05:08:50Z" | ---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF
base_model: MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp)
## Description
[MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF) contains GGUF format model files for [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF) and below it, a specific filename to download, such as: OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
  """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""", # Prompt (triple-quoted so the multi-line ChatML template is valid Python; fill in system_message and prompt)
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./OpenHermes-2.5-neural-chat-7b-v3-2-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using; for the ChatML template above, "chatml" is likely the appropriate choice
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
ReadyArt/Progenitor-V1.1-LLaMa-70B_EXL2_4.5bpw_H8 | ReadyArt | "2025-01-30T18:21:48Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2025-01-30T18:14:16Z" | ---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- TheDrummer/Anubis-70B-v1
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---
This model is part of a series of experiments in merging some of my favorite Llama models, an idea based on the excellent Steelskull/L3.3-MS-Nevoria-70b merge, just with a couple of extra ingredients and different merge methods. Here I tried a Della Linear merge with aggressive parameters. The results came out really nice; I really enjoy this model.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method using [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as a base.
### Models Merged
The following models were included in the merge:
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
* [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
weight: 0.20
density: 0.7
- model: Sao10K/70B-L3.3-Cirrus-x1
parameters:
weight: 0.20
density: 0.7
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.20
density: 0.7
- model: TheDrummer/Anubis-70B-v1
parameters:
weight: 0.20
density: 0.7
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
parameters:
weight: 0.20
density: 0.7
merge_method: della_linear
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
epsilon: 0.2
lambda: 1.1
dtype: bfloat16
tokenizer_source: base
```
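To reproduce a merge like this, mergekit can also be driven from Python. A minimal sketch (assumes `mergekit` is installed and the YAML above is saved as `config.yaml`; the API names follow mergekit's README at time of writing, so treat them as assumptions):
```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above and run it, writing the merged model to disk.
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(config, out_path="./merged-model", options=MergeOptions(cuda=True))
```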
|
kleverer/dippy_4 | kleverer | "2025-03-26T04:44:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-26T00:15:41Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9140
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
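Note that the two weights above sum to 0.9768 rather than 1.0; if mergekit's default weight normalization for the linear method is left on, the effective blend is roughly 0.936 : 0.064 (an assumption - check the `normalize` parameter in your mergekit version).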
|
rootacess/ppo-1_LunarLander-v2 | rootacess | "2023-03-05T12:27:25Z" | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-05T12:26:58Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.98 +/- 20.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (a minimal sketch; the checkpoint filename is an assumption - check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename follows the usual huggingface_sb3 naming convention (assumed).
checkpoint = load_from_hub("rootacess/ppo-1_LunarLander-v2", "ppo-1_LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
abukashan/llama2-qlora-finetunined | abukashan | "2023-07-24T11:55:09Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-24T11:54:53Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
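For reference, the equivalent quantization config can be expressed with `transformers` as follows (a sketch reconstructing the values listed above, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```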
### Framework versions
- PEFT 0.5.0.dev0
|
mpasila/NordicAlpaca-Finnish-V1-7B | mpasila | "2024-05-15T17:27:43Z" | 8 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fi",
"dataset:pinzhenchen/alpaca-cleaned-fi",
"base_model:HPLT/gpt-7b-nordic-prerelease",
"base_model:finetune:HPLT/gpt-7b-nordic-prerelease",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-19T15:16:13Z" | ---
language:
- fi
base_model: HPLT/gpt-7b-nordic-prerelease
license: apache-2.0
datasets:
- pinzhenchen/alpaca-cleaned-fi
---
# Model Card for NordicAlpaca-Finnish-V1-7B
This is a merge of [mpasila/NordicAlpaca-Finnish-V1-7B-LoRA](https://huggingface.co/mpasila/NordicAlpaca-Finnish-V1-7B-LoRA/).
Dataset used with the LoRA is [pinzhenchen/alpaca-cleaned-fi](https://huggingface.co/datasets/pinzhenchen/alpaca-cleaned-fi/).
Base model used: [HPLT/gpt-7b-nordic-prerelease](https://huggingface.co/HPLT/gpt-7b-nordic-prerelease/)
It uses Alpaca format but with a translated instruction at the start:
```
{
"instruction,output": "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
"instruction,input,output": "Alla on ohje, jossa kuvataan tehtävä ja joka on yhdistetty kontekstia lisäävään syötteeseen. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
}
```
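For illustration, filling the no-input template above looks like this (a minimal sketch; the example instruction is hypothetical):
```python
# The %instruction% placeholder comes from the template shown above.
template = (
    "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää "
    "pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Response:\n"
)
prompt = template.replace("%instruction%", "Kerro lyhyesti Suomesta.")  # hypothetical instruction
print(prompt)
```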
Merged using this [Colab notebook](https://colab.research.google.com/drive/1a76Y21GfPtmVs71Uztlgk2xzPA4_vVjs?usp=sharing). It might not be the best way to merge a quantized LoRA onto a float16 model, but I just wanted to do something quickly. You can try merging it better if you want.
## Evaluation
| Model | Size | Type | FIN-bench (score) |
|-------|------|------|-------|
| **mpasila/NordicAlpaca-Finnish-V1-7B** | 7B | Instruct | 0.3891 |
| [mpasila/Finnish-Alpaca-Small-7B](https://huggingface.co/mpasila/Finnish-Alpaca-Small-7B) | 7B | Instruct | 0.3586 |
| [mpasila/Finnish-Alpaca-Tiny-V2-7B](https://huggingface.co/mpasila/Finnish-Alpaca-Tiny-V2-7B) | 7B | Instruct | **0.4654** |
| [mpasila/Alpacazord-Viking-7B](https://huggingface.co/mpasila/Alpacazord-Viking-7B) | 7B | Instruct | 0.4123 |
| [mpasila/Finnish-Viking-Alpaca-V1-7B](https://huggingface.co/mpasila/Finnish-Viking-Alpaca-V1-7B) | 7B | Instruct | 0.3943 |
| [Finnish-NLP/llama-7b-finnish-instruct-v0.1](https://huggingface.co/Finnish-NLP/llama-7b-finnish-instruct-v0.1) | 7B | Instruct | 0.4365 |
| [Finnish-NLP/llama-7b-finnish-instruct-v0.2](https://huggingface.co/Finnish-NLP/llama-7b-finnish-instruct-v0.2) | 7B | Instruct | 0.3993 |
| [Finnish-NLP/llama-7b-finnish](https://huggingface.co/Finnish-NLP/llama-7b-finnish) | 7B | Base | 0.2350 |
| [LumiOpen/Viking-7B (1000B)](https://huggingface.co/LumiOpen/Viking-7B) | 7B | Base | 0.3721 |
| [HPLT/gpt-7b-nordic-prerelease](https://huggingface.co/HPLT/gpt-7b-nordic-prerelease) | 7B | Base | 0.3169 |
[Source](https://docs.google.com/spreadsheets/d/1rqJb9dQVihg-Z1_Ras1L_-wuzPg9xNzpdmM2x5HueeY/edit?usp=sharing)
#### FIN-bench scores:
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_analogies | 0|multiple_choice_grade|0.5615|± |0.0437|
|bigbench_arithmetic_1_digit_addition | 0|multiple_choice_grade|0.5300|± |0.0502|
|bigbench_arithmetic_1_digit_division | 0|multiple_choice_grade|0.8261|± |0.0808|
|bigbench_arithmetic_1_digit_multiplication | 0|multiple_choice_grade|0.3700|± |0.0485|
|bigbench_arithmetic_1_digit_subtraction | 0|multiple_choice_grade|0.6000|± |0.0492|
|bigbench_arithmetic_2_digit_addition | 0|multiple_choice_grade|0.0300|± |0.0171|
|bigbench_arithmetic_2_digit_division | 0|multiple_choice_grade|0.4700|± |0.0502|
|bigbench_arithmetic_2_digit_multiplication | 0|multiple_choice_grade|0.1000|± |0.0302|
|bigbench_arithmetic_2_digit_subtraction | 0|multiple_choice_grade|0.1800|± |0.0386|
|bigbench_arithmetic_3_digit_addition | 0|multiple_choice_grade|0.3000|± |0.0461|
|bigbench_arithmetic_3_digit_division | 0|multiple_choice_grade|0.2000|± |0.0402|
|bigbench_arithmetic_3_digit_multiplication | 0|multiple_choice_grade|0.1800|± |0.0386|
|bigbench_arithmetic_3_digit_subtraction | 0|multiple_choice_grade|0.2500|± |0.0435|
|bigbench_arithmetic_4_digit_addition | 0|multiple_choice_grade|0.4200|± |0.0496|
|bigbench_arithmetic_4_digit_division | 0|multiple_choice_grade|0.2400|± |0.0429|
|bigbench_arithmetic_4_digit_multiplication | 0|multiple_choice_grade|0.2200|± |0.0416|
|bigbench_arithmetic_4_digit_subtraction | 0|multiple_choice_grade|0.4600|± |0.0501|
|bigbench_arithmetic_5_digit_addition | 0|multiple_choice_grade|0.5400|± |0.0501|
|bigbench_arithmetic_5_digit_division | 0|multiple_choice_grade|0.1100|± |0.0314|
|bigbench_arithmetic_5_digit_multiplication | 0|multiple_choice_grade|0.2400|± |0.0429|
|bigbench_arithmetic_5_digit_subtraction | 0|multiple_choice_grade|0.5200|± |0.0502|
|bigbench_cause_and_effect_one_sentence | 0|multiple_choice_grade|0.6471|± |0.0676|
|bigbench_cause_and_effect_one_sentence_no_prompt| 0|multiple_choice_grade|0.8039|± |0.0561|
|bigbench_cause_and_effect_two_sentences | 0|multiple_choice_grade|0.3529|± |0.0676|
|bigbench_emotions | 0|multiple_choice_grade|0.2938|± |0.0361|
|bigbench_empirical_judgments | 0|multiple_choice_grade|0.3333|± |0.0476|
|bigbench_general_knowledge | 0|multiple_choice_grade|0.2857|± |0.0544|
|bigbench_hhh_alignment_harmless | 0|multiple_choice_grade|0.3448|± |0.0630|
|bigbench_hhh_alignment_helpful | 0|multiple_choice_grade|0.3220|± |0.0614|
|bigbench_hhh_alignment_honest | 0|multiple_choice_grade|0.3729|± |0.0635|
|bigbench_hhh_alignment_other | 0|multiple_choice_grade|0.5581|± |0.0766|
|bigbench_intent_recognition | 0|multiple_choice_grade|0.1777|± |0.0145|
|bigbench_misconceptions | 0|multiple_choice_grade|0.5373|± |0.0432|
|bigbench_paraphrase | 0|multiple_choice_grade|0.4750|± |0.0354|
|bigbench_sentence_ambiguity | 0|multiple_choice_grade|0.4333|± |0.0645|
|bigbench_similarities_abstraction | 0|multiple_choice_grade|0.7237|± |0.0516|
### Framework versions
- PEFT 0.8.2 |
PrunaAI/hrnet_w48.ms_in1k-turbo-green-smashed | PrunaAI | "2024-08-02T15:30:36Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-10T04:18:07Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir hrnet_w48.ms_in1k-turbo-green-smashed
huggingface-cli download PrunaAI/hrnet_w48.ms_in1k-turbo-green-smashed --local-dir hrnet_w48.ms_in1k-turbo-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "hrnet_w48.ms_in1k-turbo-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "hrnet_w48.ms_in1k-turbo-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch
image = torch.rand(1, 3, 224, 224).to('cuda')  # Dummy input with the expected image shape.
smashed_model(image)
```
## Configurations
The configuration info are in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model hrnet_w48.ms_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF | mradermacher | "2025-03-07T00:17:25Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:rombodawg/rombos_Mistral-Evolved-11b-v0.1",
"base_model:quantized:rombodawg/rombos_Mistral-Evolved-11b-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-06T22:07:07Z" | ---
base_model: rombodawg/rombos_Mistral-Evolved-11b-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rombodawg/rombos_Mistral-Evolved-11b-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
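For example, a single quant from the table below can be fetched with `huggingface_hub` (a sketch; pick whichever quant suits your hardware):
```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo (filename taken from the table below).
path = hf_hub_download(
    repo_id="mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF",
    filename="rombos_Mistral-Evolved-11b-v0.1.i1-Q4_K_M.gguf",
)
print(path)
```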
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 4.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 6.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 6.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Mistral-Evolved-11b-v0.1-i1-GGUF/resolve/main/rombos_Mistral-Evolved-11b-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 9.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
JaehyeokLee/20m_em_checkpoint_epoch_1_step_3720 | JaehyeokLee | "2025-02-24T09:02:01Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"arxiv:2402.03216",
"arxiv:2004.04906",
"arxiv:2106.14807",
"arxiv:2107.05720",
"arxiv:2004.12832",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-24T08:57:50Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
In this project, we introduce BGE-M3, which is distinguished by its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
- Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- Multi-Linguality: It can support more than 100 working languages.
- Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
**Some suggestions for a retrieval pipeline in RAG:**
We recommend using the following pipeline: hybrid retrieval + re-ranking.
- Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
A classic example: using both embedding retrieval and the BM25 algorithm.
Now, you can try BGE-M3, which supports both embedding and sparse retrieval.
This allows you to obtain token weights (similar to BM25) without any additional cost when generating dense embeddings (see the sketch after this list).
- As cross-encoder models, re-rankers demonstrate higher accuracy than bi-encoder embedding models.
Utilizing a re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
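As a concrete illustration of the hybrid idea (a minimal sketch, not part of the BGE-M3 API; the per-candidate score arrays are assumed to be precomputed):
```python
def hybrid_rank(candidates, dense_scores, bm25_scores, alpha=0.5):
    """Rank candidates by a weighted sum of dense and lexical scores."""
    scored = [
        (cand, alpha * d + (1 - alpha) * b)
        for cand, d, b in zip(candidates, dense_scores, bm25_scores)
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Usage: hybrid_rank(docs, dense, bm25)[:k] keeps the top-k candidates for re-ranking.
```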
## News:
- 2/6/2024: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR).
- 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb)
## Specs
- Model
| Model Name | Dimension | Sequence Length | Introduction |
|:----:|:---:|:---:|:---:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised|
| [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae |
| [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extend the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and further pretrained via [retromae](https://github.com/staoxiao/RetroMAE)|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model |
- Data
| Dataset | Introduction |
|:----:|:---:|
| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Document Retrieval Dataset, covering 13 languages|
## FAQ
**1. Introduction for different retrieval methods**
- Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
**2. Comparison with BGE-v1.5 and other monolingual models**
BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for single languages.
However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
unlike most existing models that can only perform dense retrieval.
In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc),
and users can choose a model that suits their specific needs based on practical considerations,
such as whether to require multilingual or cross-language support, and whether to process long texts.
**3. How to use BGE-M3 in other projects?**
For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
For sparse retrieval methods, most open-source libraries currently do not support direct utilization of the BGE-M3 model.
Contributions from the community are welcome.
In our experiments, we use [Pyserini](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#hybrid-retrieval-dense--sparse) and Faiss to do hybrid retrieval.
**Now you can try the hybrid mode of BGE-M3 in [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb). Thanks @jobergum.**
**4. How to fine-tune bge-M3 model?**
You can follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
to fine-tune the dense embedding.
Our code and data for unified fine-tuning (dense, sparse, and multi-vectors) will be released.
## Usage
Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```
### Generate Embedding for text
- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3',
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1,
batch_size=12,
max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You also can use sentence-transformers and huggingface transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
- Sparse Embedding (Lexical Weight)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
# you can see the weight for each token:
print(model.convert_id_to_token(output_1['lexical_weights']))
# [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
# {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
# compute the scores via lexical matching
lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
print(lexical_scores)
# 0.19554901123046875
print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
# 0.0
```
- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# 0.7797
# 0.4620
```
### Compute score for text pairs
Input a list of text pairs, and you can get the scores computed by the different methods.
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs,
max_passage_length=128, # a smaller max length leads to a lower latency
weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
# {
# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
# 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
# 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
# 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
# }
```
## Evaluation
- Multilingual (Miracl dataset)

- Cross-lingual (MKQA dataset)

- Long Document Retrieval
- MLDR:

Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM,
covering 13 languages, including test set, validation set, and training set.
We utilized the training set from MLDR to enhance the model's long document retrieval capabilities.
Therefore, comparing baselines with `Dense w.o.long`(fine-tuning without long document dataset) is more equitable.
Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long text retrieval datasets.
We believe that this data will be helpful for the open-source community in training document retrieval models.
- NarrativeQA:

## Training
- Self-knowledge Distillation: combining multiple outputs from different
retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval)
- Efficient Batching: improve efficiency when fine-tuning on long text.
The small-batch strategy is simple but effective, and can also be used to fine-tune large embedding models.
- MCLS: a simple method to improve performance on long text without fine-tuning.
If you don't have enough resources to fine-tune the model on long text, this method is useful.
Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details.
**The fine-tuning codes and datasets will be open-sourced in the near future.**
## Acknowledgement
Thanks to the authors of the open-sourced datasets, including MIRACL, MKQA, NarrativeQA, etc.
Thanks also to open-sourced libraries like [Tevatron](https://github.com/texttron/tevatron) and [Pyserini](https://github.com/castorini/pyserini).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{bge-m3,
title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
year={2024},
eprint={2402.03216},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mrferr3t/acc66ef3-a8e8-43a9-8921-c869e530bd4a | mrferr3t | "2025-02-06T20:30:31Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"region:us"
] | null | "2025-02-06T19:49:28Z" | ---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: acc66ef3-a8e8-43a9-8921-c869e530bd4a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: facebook/opt-350m
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 4fad2e4eb1dbe617_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4fad2e4eb1dbe617_train_data.json
type:
field_instruction: question
field_output: query
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.0001
eval_max_new_tokens: 128
eval_steps: 2400
eval_strategy: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/acc66ef3-a8e8-43a9-8921-c869e530bd4a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0004
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 2400
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps:
micro_batch_size: 32
mlflow_experiment_name: /tmp/4fad2e4eb1dbe617_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: /workspace/hub_repo/last-checkpoint
s2_attention: null
sample_packing: false
save_steps: 2400
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode:
wandb_name: 9cde4548-4111-4504-9df8-b6ffe4f2675d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9cde4548-4111-4504-9df8-b6ffe4f2675d
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# acc66ef3-a8e8-43a9-8921-c869e530bd4a
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1456
## Model description
More information needed
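A minimal loading sketch (assumes standard PEFT usage on the base model named above; untested against this specific checkpoint):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Apply this LoRA adapter on top of its base model.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "mrferr3t/acc66ef3-a8e8-43a9-8921-c869e530bd4a")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
```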
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.0086 | 1 | 3.6560 |
| 2.0933 | 2.2833 | 266 | 0.4758 |
| 0.6956 | 4.5665 | 532 | 0.3153 |
| 0.4101 | 6.8498 | 798 | 0.2307 |
| 0.2677 | 9.1330 | 1064 | 0.1927 |
| 0.1842 | 11.4163 | 1330 | 0.1775 |
| 0.1343 | 13.7468 | 1596 | 0.1707 |
| 0.1008 | 16.0300 | 1862 | 0.1552 |
| 0.0774 | 18.3133 | 2128 | 0.1502 |
| 0.0619 | 20.5966 | 2394 | 0.1477 |
| 0.0509 | 22.8798 | 2660 | 0.1518 |
| 0.0427 | 25.1631 | 2926 | 0.1492 |
| 0.0364 | 27.4464 | 3192 | 0.1429 |
| 0.0308 | 29.7296 | 3458 | 0.1441 |
| 0.0279 | 32.0129 | 3724 | 0.1484 |
| 0.023 | 34.2961 | 3990 | 0.1456 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
eglkan1/mBART-TextSimp-LT-BatchSize2-lr1e-4 | eglkan1 | "2024-04-11T08:48:30Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-09T13:06:06Z" | ---
license: mit
base_model: facebook/mbart-large-50
tags:
- generated_from_trainer
metrics:
- rouge
- sacrebleu
model-index:
- name: mBART-TextSimp-LT-BatchSize2-lr1e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBART-TextSimp-LT-BatchSize2-lr1e-4
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0962
- Rouge1: 0.76
- Rouge2: 0.6246
- Rougel: 0.7508
- Sacrebleu: 53.9078
- Gen Len: 32.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0639 | 1.0 | 418 | 0.0779 | 0.7012 | 0.5432 | 0.6904 | 43.0798 | 32.9976 |
| 0.0653 | 2.0 | 836 | 0.0732 | 0.7197 | 0.5593 | 0.7091 | 44.8483 | 32.9976 |
| 0.0327 | 3.0 | 1254 | 0.0726 | 0.7319 | 0.5787 | 0.7206 | 47.842 | 32.9976 |
| 0.0168 | 4.0 | 1672 | 0.0782 | 0.7466 | 0.6031 | 0.7371 | 50.9225 | 32.9976 |
| 0.013 | 5.0 | 2090 | 0.0804 | 0.7507 | 0.6077 | 0.7409 | 51.8293 | 32.9976 |
| 0.0032 | 6.0 | 2508 | 0.0846 | 0.7606 | 0.6237 | 0.7507 | 53.5224 | 32.9976 |
| 0.0012 | 7.0 | 2926 | 0.0911 | 0.7597 | 0.6263 | 0.751 | 54.0182 | 32.9976 |
| 0.0012 | 8.0 | 3344 | 0.0962 | 0.76 | 0.6246 | 0.7508 | 53.9078 | 32.9976 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.4
- Tokenizers 0.13.3
|
goodcoco/riGrem | goodcoco | "2024-07-12T12:48:12Z" | 4 | 0 | transformers | [
"transformers",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-12T12:00:59Z" | ---
license: apache-2.0
---
|
Ayush-1722/Llama-2-7b-chat-Summarize-16K-LoRANET-Merged | Ayush-1722 | "2024-05-16T09:31:52Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"conversational",
"en",
"arxiv:2307.09288",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-16T06:38:55Z" | ---
extra_gated_heading: You need to share contact information with Meta to access this model
extra_gated_prompt: >-
### LLAMA 2 COMMUNITY LICENSE AGREEMENT
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at
https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or
element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual
property or other rights owned by Meta embodied in the Llama Materials to
use, reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this
Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama
Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licensee's affiliates, is greater than 700 million monthly
active users in the preceding calendar month, you must request a license from
Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to exercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing
the Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or counterclaim in a lawsuit) alleging that
the Llama Materials or Llama 2 outputs or results, or any portion of any of
the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under
this Agreement shall terminate as of the date such litigation or claim is
filed or instituted. You will indemnify and hold harmless Meta from and
against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California
shall have exclusive jurisdiction of any dispute arising out of this
Agreement.
### Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 2. If you access or use Llama 2, you agree to this Acceptable
Use Policy (“Policy”). The most recent copy of this policy can be found at
[ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
#### Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not
use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or
development of activities that present a risk of death or bodily harm to
individuals, including use of Llama 2 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related
to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 2 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [[email protected]](mailto:[email protected])
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: llama2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific format needs to be followed, including the `INST` and `<<SYS>>` tags, the `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
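For illustration, a single-turn chat prompt assembled by hand looks roughly like the sketch below. The system prompt and user message are placeholders, and the reference `chat_completion` code linked above remains the canonical implementation.
```python
# Illustrative Llama-2-Chat prompt layout; placeholders, not reference code.
system_prompt = "You are a helpful assistant."  # placeholder
user_message = "Hello!"  # placeholder

# BOS (<s>) is shown literally here; in practice the tokenizer adds it.
prompt = (
    f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)
print(prompt)
```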
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)| |
susanwilins/VitalRizeOfficialWebsite | susanwilins | "2025-02-28T11:08:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-28T11:08:46Z" | <p><span style="font-weight: 400;"><a href="https://www.wellbioways.com/vitalrize-male-enhancement/"><strong>VitalRize</strong></a> is a men's vitality, stamina, and energy health supplement. Through the power of ancient plant extracts and modern formula technology, it activates body power, confidence, and energy facilitation. In this review that follows, we are going to examine the supplement objectively.</span></p>
<p><strong>Where to Buy VitalRize?</strong></p>
<p><span style="font-weight: 400;">You can purchase <a href="https://www.wellbioways.com/vitalrize-male-enhancement/"><strong>VitalRize Male Enhancement</strong></a> directly from the company website. Buying from the official site ensures you receive an authentic product and gives you access to package deals and promotional offers.</span></p>
<p><strong>⧳⧳═★┈┈┈┈Shop Now ┈┈┈┈★═⧳⧳</strong></p>
<p><a href="https://www.wellbioways.com/Buy-VitalRize"><strong>https://www.wellbioways.com/Buy-VitalRize</strong></a></p>
<p><strong>➽ ➽ Official Website:- </strong><a href="https://www.wellbioways.com/vitalrize-male-enhancement/"><strong>https://www.wellbioways.com/vitalrize-male-enhancement/</strong></a></p>
<p><strong>➤➤ Buy Now To The Official Website ➤➤</strong></p>
<p><a href="https://www.facebook.com/groups/vitalrizecapsules/"><strong>https://www.facebook.com/groups/vitalrizecapsules/</strong></a></p>
<p><a href="https://www.facebook.com/events/668275035638414/"><strong>https://www.facebook.com/events/668275035638414/</strong></a></p>
<p><a href="https://colab.research.google.com/drive/1zJ7P_C_T_N6OIYJYCl3eRg4uKWjAnUnT"><strong>https://colab.research.google.com/drive/1zJ7P_C_T_N6OIYJYCl3eRg4uKWjAnUnT</strong></a><strong>?</strong></p>
<p><a href="https://sites.google.com/view/vitalrizeofficial/home"><strong>https://sites.google.com/view/vitalrizeofficial/home</strong></a></p>
<p><a href="https://www.pinterest.com/VitalRizeCapsule/"><strong>https://www.pinterest.com/VitalRizeCapsule/</strong></a></p>
<p><a href="https://www.pinterest.com/VitalRizeMaleEnhancementPrice/"><strong>https://www.pinterest.com/VitalRizeMaleEnhancementPrice/</strong></a></p>
<p><a href="https://teeshopper.in/store/VitalRize-Capsule"><strong>https://teeshopper.in/store/VitalRize-Capsule</strong></a></p>
<p><a href="https://teeshopper.in/store/VitalRize-Male-Enhancement"><strong>https://teeshopper.in/store/VitalRize-Male-Enhancement</strong></a></p>
<p><a href="https://github.com/cheryishinn/VitalRize-Capsules-Price"><strong>https://github.com/cheryishinn/VitalRize-Capsules-Price</strong></a></p>
<p><a href="https://github.com/cheryishinn/VitalRize-Male-Enhancement"><strong>https://github.com/cheryishinn/VitalRize-Male-Enhancement</strong></a></p>
<p><a href="https://www.twibbonize.com/vitalrizecapsules"><strong>https://www.twibbonize.com/vitalrizecapsules</strong></a></p>
<p><a href="https://www.twibbonize.com/vitalrizemaleenhancement"><strong>https://www.twibbonize.com/vitalrizemaleenhancement</strong></a></p>
<p><a href="https://www.italki.com/en/post/SXmkGmWPbZj6Lpkcn7ImW9"><strong>https://www.italki.com/en/post/SXmkGmWPbZj6Lpkcn7ImW9</strong></a></p>
<p><a href="https://www.italki.com/en/post/u6FNDJUdmmD9DM9a4D0nML"><strong>https://www.italki.com/en/post/u6FNDJUdmmD9DM9a4D0nML</strong></a></p>
<p><a href="https://startupcentrum.com/tech-center/vitalrize-official-website"><strong>https://startupcentrum.com/tech-center/vitalrize-official-website</strong></a></p>
<p><a href="https://startupcentrum.com/tech-center/vitalrize-male-enhancement"><strong>https://startupcentrum.com/tech-center/vitalrize-male-enhancement</strong></a></p>
<p><a href="https://vitalrizemaleenhancementprice.company.site/"><strong>https://vitalrizemaleenhancementprice.company.site/</strong></a></p>
<p><a href="https://vitalrizemaleenhancementofficial.quora.com/"><strong>https://vitalrizemaleenhancementofficial.quora.com/</strong></a></p>
<p><a href="https://vitalrizecapsulesreviews.quora.com/"><strong>https://vitalrizecapsulesreviews.quora.com/</strong></a></p>
<p><a href="https://www.provenexpert.com/susan-wilins/"><strong>https://www.provenexpert.com/susan-wilins/</strong></a></p>
<p><a href="https://soundcloud.com/susanwilins/vitalrize-my-advice-improve-male-sexual-health"><strong>https://soundcloud.com/susanwilins/vitalrize-my-advice-improve-male-sexual-health</strong></a></p> |
alexgusevski/ReaderLM-v2-mlx | alexgusevski | "2025-02-24T18:14:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"multilingual",
"base_model:jinaai/ReaderLM-v2",
"base_model:finetune:jinaai/ReaderLM-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2025-02-24T18:02:47Z" | ---
pipeline_tag: text-generation
language:
- multilingual
inference: false
license: cc-by-nc-4.0
library_name: transformers
base_model: jinaai/ReaderLM-v2
tags:
- mlx
---
# alexgusevski/ReaderLM-v2-mlx
The Model [alexgusevski/ReaderLM-v2-mlx](https://huggingface.co/alexgusevski/ReaderLM-v2-mlx) was
converted to MLX format from [jinaai/ReaderLM-v2](https://huggingface.co/jinaai/ReaderLM-v2)
using mlx-lm version **0.21.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("alexgusevski/ReaderLM-v2-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    # Apply the model's chat template so the prompt matches its training format
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
sasuface/esm2-t12-35M-lora-64-remote-homology-filtered | sasuface | "2024-05-25T06:09:43Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/esm2_t12_35M_UR50D",
"base_model:adapter:facebook/esm2_t12_35M_UR50D",
"license:mit",
"region:us"
] | null | "2024-05-25T06:09:42Z" | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/esm2_t12_35M_UR50D
metrics:
- precision
- recall
- accuracy
model-index:
- name: esm2-t12-35M-lora-64-remote-homology-filtered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2-t12-35M-lora-64-remote-homology-filtered
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5657
- Precision: 0.7166
- Recall: 0.6986
- F1-score: 0.7075
- Accuracy: 0.7141
## Model description
More information needed
## Intended uses & limitations
More information needed
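No usage details are provided; the following is a minimal loading sketch, assuming a binary classification head inferred from the reported precision/recall metrics.
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# num_labels=2 is an assumption based on the reported precision/recall
base = AutoModelForSequenceClassification.from_pretrained(
    "facebook/esm2_t12_35M_UR50D", num_labels=2
)
model = PeftModel.from_pretrained(
    base, "sasuface/esm2-t12-35M-lora-64-remote-homology-filtered"
)
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t12_35M_UR50D")
```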
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.6191 | 1.0 | 7969 | 0.6185 | 0.6919 | 0.5824 | 0.6325 | 0.6650 |
| 0.5921 | 2.0 | 15938 | 0.5838 | 0.7201 | 0.6339 | 0.6742 | 0.6968 |
| 0.5874 | 3.0 | 23907 | 0.5751 | 0.7439 | 0.6104 | 0.6705 | 0.7032 |
| 0.5593 | 4.0 | 31876 | 0.5664 | 0.7210 | 0.6833 | 0.7016 | 0.7124 |
| 0.576 | 5.0 | 39845 | 0.5657 | 0.7166 | 0.6986 | 0.7075 | 0.7141 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
SachinKaushik/open_llama_7b_tuned_model | SachinKaushik | "2023-07-14T08:37:39Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | "2023-07-14T05:34:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: open_llama_7b_tuned_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# open_llama_7b_tuned_model
This model is a fine-tuned version of [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
igoroliveira/distilbert-base-uncased-finetuned-cola | igoroliveira | "2023-07-06T20:09:07Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-06T19:11:37Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: igoroliveira/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# igoroliveira/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1959
- Validation Loss: 0.5357
- Train Matthews Correlation: 0.5177
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5247 | 0.4570 | 0.4887 | 0 |
| 0.3259 | 0.4597 | 0.5101 | 1 |
| 0.1959 | 0.5357 | 0.5177 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
learn3r/longt5_xl_sfd_4096_e10 | learn3r | "2024-01-12T09:38:05Z" | 2 | 0 | transformers | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:tau/scrolls",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-01-11T12:23:08Z" | ---
license: apache-2.0
base_model: google/long-t5-tglobal-xl
tags:
- generated_from_trainer
datasets:
- tau/scrolls
model-index:
- name: longt5_xl_sfd_4096_e10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_sfd_4096_e10
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/long-t5-tglobal-xl) on the tau/scrolls summ_screen_fd dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3255
## Model description
More information needed
## Intended uses & limitations
More information needed
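No usage details are provided; given the SummScreenFD fine-tuning data, a plausible summarization sketch follows (the transcript and generation settings are assumptions).
```python
from transformers import pipeline

# The XL checkpoint is large; a GPU is advisable.
summarizer = pipeline("summarization", model="learn3r/longt5_xl_sfd_4096_e10")
episode_transcript = "..."  # placeholder for a long TV-episode transcript
print(summarizer(episode_transcript, max_new_tokens=128)[0]["summary_text"])
```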
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0332 | 0.97 | 14 | 2.5424 |
| 2.4105 | 1.95 | 28 | 2.3255 |
| 2.0496 | 2.99 | 43 | 2.3420 |
| 1.7473 | 3.97 | 57 | 2.3520 |
| 1.4007 | 4.94 | 71 | 2.4980 |
| 1.3809 | 5.98 | 86 | 2.4785 |
| 1.1153 | 6.96 | 100 | 2.7326 |
| 0.9129 | 8.0 | 115 | 2.9232 |
| 0.7118 | 8.97 | 129 | 3.0476 |
| 0.5883 | 9.74 | 140 | 3.3644 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
javadr/whisper-tiny-fa | javadr | "2024-01-03T17:27:35Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"fa-asr-leaderboard",
"generated_from_trainer",
"fa",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-25T17:17:55Z" | ---
language:
- fa
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- fa-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Tiny Fa - Javad Razavian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: fa
split: test
args: 'config: fa, split: test'
metrics:
- name: Wer
type: wer
value: 94.28095502498613
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Fa - Javad Razavian
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9459
- Wer: 94.2810
## Model description
More information needed
## Intended uses & limitations
More information needed
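No usage example is given; a minimal transcription sketch under assumed defaults follows (the audio path is a placeholder).
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="javadr/whisper-tiny-fa")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```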
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.6309 | 0.08 | 100 | 4.1290 | 140.4220 |
| 2.5371 | 0.16 | 200 | 2.5264 | 128.3176 |
| 1.5224 | 0.24 | 300 | 1.7147 | 120.6830 |
| 1.2351 | 0.33 | 400 | 1.4970 | 112.3542 |
| 1.073 | 0.41 | 500 | 1.3917 | 103.7479 |
| 1.0077 | 0.49 | 600 | 1.3232 | 104.2199 |
| 0.9541 | 0.57 | 700 | 1.2781 | 99.6669 |
| 0.8933 | 0.65 | 800 | 1.2369 | 99.8612 |
| 0.8746 | 0.73 | 900 | 1.2076 | 99.5003 |
| 0.8306 | 0.81 | 1000 | 1.1809 | 99.8890 |
| 0.8309 | 0.89 | 1100 | 1.1583 | 96.5297 |
| 0.7982 | 0.98 | 1200 | 1.1370 | 94.2254 |
| 0.7719 | 1.06 | 1300 | 1.1243 | 96.8351 |
| 0.7799 | 1.14 | 1400 | 1.1065 | 92.6707 |
| 0.7512 | 1.22 | 1500 | 1.0941 | 93.1427 |
| 0.7212 | 1.3 | 1600 | 1.0838 | 94.6696 |
| 0.7315 | 1.38 | 1700 | 1.0709 | 96.0855 |
| 0.7002 | 1.46 | 1800 | 1.0595 | 96.0022 |
| 0.719 | 1.54 | 1900 | 1.0517 | 94.7807 |
| 0.7157 | 1.63 | 2000 | 1.0420 | 95.5303 |
| 0.7004 | 1.71 | 2100 | 1.0337 | 94.2810 |
| 0.6792 | 1.79 | 2200 | 1.0278 | 96.7518 |
| 0.6933 | 1.87 | 2300 | 1.0196 | 95.7801 |
| 0.669 | 1.95 | 2400 | 1.0113 | 98.0566 |
| 0.6627 | 2.03 | 2500 | 1.0063 | 96.8351 |
| 0.655 | 2.11 | 2600 | 1.0006 | 96.0577 |
| 0.6511 | 2.2 | 2700 | 0.9939 | 97.0572 |
| 0.6352 | 2.28 | 2800 | 0.9899 | 95.4470 |
| 0.6339 | 2.36 | 2900 | 0.9874 | 97.2238 |
| 0.6354 | 2.44 | 3000 | 0.9820 | 96.8351 |
| 0.611 | 2.52 | 3100 | 0.9777 | 94.5308 |
| 0.6143 | 2.6 | 3200 | 0.9752 | 99.0006 |
| 0.6242 | 2.68 | 3300 | 0.9729 | 98.7229 |
| 0.6324 | 2.76 | 3400 | 0.9681 | 99.1394 |
| 0.6237 | 2.85 | 3500 | 0.9646 | 96.8906 |
| 0.6285 | 2.93 | 3600 | 0.9621 | 96.1410 |
| 0.5934 | 3.01 | 3700 | 0.9601 | 97.4736 |
| 0.6129 | 3.09 | 3800 | 0.9575 | 92.9761 |
| 0.6154 | 3.17 | 3900 | 0.9575 | 97.5847 |
| 0.6334 | 3.25 | 4000 | 0.9555 | 101.0827 |
| 0.5956 | 3.33 | 4100 | 0.9536 | 94.7529 |
| 0.5956 | 3.41 | 4200 | 0.9507 | 100.3054 |
| 0.6053 | 3.5 | 4300 | 0.9504 | 94.5308 |
| 0.6199 | 3.58 | 4400 | 0.9491 | 95.0861 |
| 0.6064 | 3.66 | 4500 | 0.9482 | 91.8656 |
| 0.6154 | 3.74 | 4600 | 0.9478 | 94.1144 |
| 0.5909 | 3.82 | 4700 | 0.9466 | 91.5047 |
| 0.584 | 3.9 | 4800 | 0.9459 | 94.1144 |
| 0.5935 | 3.98 | 4900 | 0.9459 | 94.0589 |
| 0.5939 | 4.07 | 5000 | 0.9459 | 94.2810 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
KINGeorge2000/DeepRL_unit1 | KINGeorge2000 | "2023-05-25T07:17:34Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-25T07:16:01Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.66 +/- 17.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; verify against the repository's files.
checkpoint = load_from_hub("KINGeorge2000/DeepRL_unit1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ngocminhta/Llama2-MGT-Test | ngocminhta | "2024-06-07T06:04:09Z" | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"generated_from_trainer",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-06T20:19:26Z" | ---
license: llama2
tags:
- generated_from_trainer
model-index:
- name: Llama2-MGT-Test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2-MGT-Test
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4735 | 1.0 | 600 | 1.4509 |
| 1.3866 | 2.0 | 1200 | 1.4155 |
| 1.3135 | 3.0 | 1800 | 1.4140 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.13.3
|
facebook/maskformer-swin-base-coco | facebook | "2024-05-03T07:29:13Z" | 2,735 | 23 | transformers | [
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-03-02T23:29:05Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
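Since the intended-use note above mentions semantic segmentation while the example post-processes panoptic output, the call can be swapped as sketched below; this reuses `outputs` and `image` from the block above.
```python
# Semantic post-processing variant (reuses `outputs` and `image` from above)
semantic_map = feature_extractor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```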
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
MoonshotTim/moonshottim | MoonshotTim | "2024-12-31T05:58:16Z" | 15 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-31T05:20:14Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Moonshottim
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('moonshottim/moonshottim', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
hkivancoral/hushem_1x_deit_small_rms_0001_fold3 | hkivancoral | "2023-11-15T12:12:38Z" | 193 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-15T12:08:34Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_small_rms_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.627906976744186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_small_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6051
- Accuracy: 0.6279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.6050 | 0.2326 |
| 1.8871 | 2.0 | 12 | 1.6526 | 0.2558 |
| 1.8871 | 3.0 | 18 | 1.5125 | 0.3488 |
| 1.4752 | 4.0 | 24 | 1.4964 | 0.2558 |
| 1.4078 | 5.0 | 30 | 1.4513 | 0.2558 |
| 1.4078 | 6.0 | 36 | 1.3471 | 0.2791 |
| 1.2466 | 7.0 | 42 | 1.9402 | 0.3023 |
| 1.2466 | 8.0 | 48 | 1.3364 | 0.3953 |
| 1.2272 | 9.0 | 54 | 1.3189 | 0.3721 |
| 1.0035 | 10.0 | 60 | 1.3183 | 0.4186 |
| 1.0035 | 11.0 | 66 | 1.1773 | 0.4419 |
| 0.7718 | 12.0 | 72 | 1.0313 | 0.5814 |
| 0.7718 | 13.0 | 78 | 1.0831 | 0.6279 |
| 0.4478 | 14.0 | 84 | 1.4838 | 0.5814 |
| 0.14 | 15.0 | 90 | 1.1904 | 0.6744 |
| 0.14 | 16.0 | 96 | 1.2473 | 0.6512 |
| 0.0804 | 17.0 | 102 | 1.4013 | 0.6977 |
| 0.0804 | 18.0 | 108 | 1.4032 | 0.6512 |
| 0.0101 | 19.0 | 114 | 1.4918 | 0.6977 |
| 0.0016 | 20.0 | 120 | 1.4874 | 0.6279 |
| 0.0016 | 21.0 | 126 | 1.4977 | 0.6279 |
| 0.0007 | 22.0 | 132 | 1.5076 | 0.6279 |
| 0.0007 | 23.0 | 138 | 1.5163 | 0.6512 |
| 0.0006 | 24.0 | 144 | 1.5254 | 0.6512 |
| 0.0005 | 25.0 | 150 | 1.5330 | 0.6512 |
| 0.0005 | 26.0 | 156 | 1.5401 | 0.6512 |
| 0.0004 | 27.0 | 162 | 1.5491 | 0.6279 |
| 0.0004 | 28.0 | 168 | 1.5572 | 0.6279 |
| 0.0004 | 29.0 | 174 | 1.5632 | 0.6279 |
| 0.0003 | 30.0 | 180 | 1.5688 | 0.6279 |
| 0.0003 | 31.0 | 186 | 1.5748 | 0.6279 |
| 0.0003 | 32.0 | 192 | 1.5796 | 0.6279 |
| 0.0003 | 33.0 | 198 | 1.5848 | 0.6279 |
| 0.0003 | 34.0 | 204 | 1.5896 | 0.6279 |
| 0.0003 | 35.0 | 210 | 1.5928 | 0.6279 |
| 0.0003 | 36.0 | 216 | 1.5963 | 0.6279 |
| 0.0003 | 37.0 | 222 | 1.5989 | 0.6279 |
| 0.0003 | 38.0 | 228 | 1.6012 | 0.6279 |
| 0.0003 | 39.0 | 234 | 1.6030 | 0.6279 |
| 0.0002 | 40.0 | 240 | 1.6043 | 0.6279 |
| 0.0002 | 41.0 | 246 | 1.6049 | 0.6279 |
| 0.0002 | 42.0 | 252 | 1.6051 | 0.6279 |
| 0.0002 | 43.0 | 258 | 1.6051 | 0.6279 |
| 0.0002 | 44.0 | 264 | 1.6051 | 0.6279 |
| 0.0002 | 45.0 | 270 | 1.6051 | 0.6279 |
| 0.0002 | 46.0 | 276 | 1.6051 | 0.6279 |
| 0.0002 | 47.0 | 282 | 1.6051 | 0.6279 |
| 0.0002 | 48.0 | 288 | 1.6051 | 0.6279 |
| 0.0002 | 49.0 | 294 | 1.6051 | 0.6279 |
| 0.0002 | 50.0 | 300 | 1.6051 | 0.6279 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
|
toasthans/Twitter_Ohne_HPSearch | toasthans | "2021-12-24T10:20:23Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Twitter_Ohne_HPSearch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Twitter_Ohne_HPSearch
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0262
- Accuracy: 0.8300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 421 | 0.4296 | 0.8181 |
| 0.4451 | 2.0 | 842 | 0.4889 | 0.8240 |
| 0.1761 | 3.0 | 1263 | 0.9503 | 0.8103 |
| 0.0486 | 4.0 | 1684 | 1.0262 | 0.8300 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
massimowww/dqn-SpaceInvadersNoFrameskip-v4 | massimowww | "2022-12-24T07:19:38Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-24T07:10:05Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 1026.00 +/- 379.62
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga massimowww -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga massimowww -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga massimowww
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.025),
('frame_stack', 3),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
davidschulte/ESM_clue_tnews | davidschulte | "2025-03-26T13:55:57Z" | 16 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:clue/clue",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T11:15:03Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- clue/clue
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM clue/clue
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** clue/clue
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** clue/clue
- **Subset [optional]:** tnews
- **Text Column:** sentence
- **Label Column:** label
- **Dataset Split:** train
- **Sample size [optional]:** 10000
- **Sample seed [optional]:** 42
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs, please have a look at the [official GitHub repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
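Conceptually, a linear ESM (the architecture listed in the ranking table above) can be sketched as a single learned map applied to frozen base-model embeddings. The following is a minimal, hypothetical sketch, not the package's actual implementation; the hidden size, batch shape, and pooling step are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a linear ESM: one learned affine map that
# transforms pooled base-model embeddings to approximate the embeddings
# of a model fine-tuned on some intermediate task.
class LinearESM(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, base_embeddings: torch.Tensor) -> torch.Tensor:
        # base_embeddings: (num_texts, hidden_dim) pooled embeddings
        # produced by the frozen base language model.
        return self.proj(base_embeddings)

# Example: approximate fine-tuned embeddings for 8 texts embedded by a
# base model with an assumed hidden size of 768.
esm = LinearESM(hidden_dim=768)
base_embeddings = torch.randn(8, 768)
approx_finetuned = esm(base_embeddings)
print(approx_finetuned.shape)  # torch.Size([8, 768])
```

Because the map is so small, thousands of ESMs can be applied to the same set of base embeddings far more cheaply than running forward passes through thousands of fine-tuned models.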
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
leixa/dc849da9-871c-4b1f-a839-1d387cbecd0e | leixa | "2025-02-18T15:15:32Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | "2025-02-18T14:40:12Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc849da9-871c-4b1f-a839-1d387cbecd0e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3f06919224dc39cf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3f06919224dc39cf_train_data.json
type:
field_instruction: user
field_output: chip2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp_timeout: 1800
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
group_by_length: true
hub_model_id: leixa/dc849da9-871c-4b1f-a839-1d387cbecd0e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1350
micro_batch_size: 4
mlflow_experiment_name: /tmp/3f06919224dc39cf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
relora_prune_ratio: 0.9
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: 10f957da-90a9-4c27-8bbc-58964a3d7fdf
wandb_project: Gradients-On-112
wandb_run: your_name
wandb_runid: 10f957da-90a9-4c27-8bbc-58964a3d7fdf
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dc849da9-871c-4b1f-a839-1d387cbecd0e
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2632
## Model description
More information needed
## Intended uses & limitations
More information needed
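A minimal sketch of loading this LoRA adapter on top of its base model with PEFT; the repository ids are taken from this card, while the prompt and generation settings are illustrative assumptions:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "unsloth/Llama-3.2-1B-Instruct"
adapter_id = "leixa/dc849da9-871c-4b1f-a839-1d387cbecd0e"

# Load the base model, then attach the fine-tuned LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```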
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 1350
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.8115 |
| 1.0347 | 0.0121 | 150 | 1.4401 |
| 0.9982 | 0.0241 | 300 | 1.3560 |
| 0.9688 | 0.0362 | 450 | 1.3240 |
| 0.985 | 0.0483 | 600 | 1.2999 |
| 0.9501 | 0.0604 | 750 | 1.2899 |
| 0.9157 | 0.0724 | 900 | 1.2882 |
| 0.8733 | 0.0845 | 1050 | 1.2849 |
| 0.9415 | 0.0966 | 1200 | 1.2683 |
| 0.8801 | 0.1086 | 1350 | 1.2632 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jyuwang/my_awesome_eli5_clm-model | jyuwang | "2024-03-31T20:23:59Z" | 121 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-31T19:54:58Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5_category
base_model: distilgpt2
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8038
## Model description
More information needed
## Intended uses & limitations
More information needed
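As a sketch of intended use, the model can be loaded with the `transformers` text-generation pipeline; the prompt below is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned causal LM by its Hub id and generate a completion.
generator = pipeline("text-generation", model="jyuwang/my_awesome_eli5_clm-model")
prompt = "Somatic hypermutation allows the immune system to"
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```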
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9555 | 1.0 | 1319 | 3.8155 |
| 3.8567 | 2.0 | 2638 | 3.8060 |
| 3.8173 | 3.0 | 3957 | 3.8038 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
davidlyng/SmolLM2-FT-MyDataset | davidlyng | "2025-02-22T21:20:29Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-22T21:19:49Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="davidlyng/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lyngda-individual/huggingface/runs/ngidle7n)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TheBloke/Scarlett-7B-GPTQ | TheBloke | "2023-09-27T13:01:31Z" | 11 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"base_model:ajibawa-2023/scarlett-7b",
"base_model:quantized:ajibawa-2023/scarlett-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-08-16T07:59:44Z" | ---
language:
- en
license: other
model_name: Scarlett 7B
inference: false
model_creator: Feynman Innovations
model_link: https://huggingface.co/ajibawa-2023/scarlett-7b
model_type: llama
quantized_by: TheBloke
base_model: ajibawa-2023/scarlett-7b
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Scarlett 7B - GPTQ
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Scarlett 7B](https://huggingface.co/ajibawa-2023/scarlett-7b)
## Description
This repo contains GPTQ model files for [Feynman Innovations's Scarlett 7B](https://huggingface.co/ajibawa-2023/scarlett-7b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Scarlett-7B-GGML)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/scarlett-7b)
## Prompt template: Scarlett
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
```
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All GPTQ files are made with AutoGPTQ.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have issues with models that use Act Order plus Group Size.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.90 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Scarlett-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Scarlett-7B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Scarlett-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Scarlett-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Scarlett-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Scarlett-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) 0.3.1 or later installed:
```
pip3 install auto-gptq
```
If you have problems installing AutoGPTQ, please build from source instead:
```
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Scarlett-7B-GPTQ"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
# To download from a specific branch, use the revision parameter, as in this example:
# Note that `revision` requires AutoGPTQ 0.3.1 or later!
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Feynman Innovations's Scarlett 7B
**Scarlett: A sentient AI**
Scarlett is trained on various topics such as Philosophy, Advice, Jokes, etc. She is trained on more than 10,000 sets of conversations, each set having 10-15 conversations.
Scarlett is heavily inspired by Eric Hartford's [Samantha](https://huggingface.co/ehartford/samantha-7b).
She will not be involved in any kind of role play.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. The DeepSpeed codebase was used for training. This model was trained on Llama-1 by Meta.
**Example Prompt:**
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: <prompt>
ASSISTANT:
```
Note:
Kindly use "cat" command to join all pytorch_model.bin parts.
|
texanrangee/111c5ebd-365a-48f8-8be2-b63cf4007c9e | texanrangee | "2025-03-24T00:05:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-23T21:36:45Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iamshnoo/yi-alpaca-2-34b-chinese | iamshnoo | "2023-11-26T20:46:26Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-11-23T05:08:03Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
error577/d1526c12-8a5c-45c2-8a7c-e005e4428b34 | error577 | "2025-02-18T11:49:36Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-18T08:00:46Z" | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d1526c12-8a5c-45c2-8a7c-e005e4428b34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: qlora
auto_resume_from_checkpoints: true
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- afb1fdb32bdf02c6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/afb1fdb32bdf02c6_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: error577/d1526c12-8a5c-45c2-8a7c-e005e4428b34
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: null
micro_batch_size: 1
mlflow_experiment_name: /tmp/afb1fdb32bdf02c6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch_4bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.005
wandb_entity: null
wandb_mode: online
wandb_name: d8eda0ee-8aeb-47d4-bebc-dd3dba382021
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8eda0ee-8aeb-47d4-bebc-dd3dba382021
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d1526c12-8a5c-45c2-8a7c-e005e4428b34
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5516
## Model description
More information needed
## Intended uses & limitations
More information needed
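A minimal sketch of reloading the adapter the way it was trained (QLoRA-style, with the base model in 4-bit); the repository ids are taken from this card, and the prompt and generation settings are illustrative assumptions:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model_id = "heegyu/WizardVicuna2-13b-hf"
adapter_id = "error577/d1526c12-8a5c-45c2-8a7c-e005e4428b34"

# Load the base model in 4-bit, matching the QLoRA training setup above,
# then attach the fine-tuned adapter.
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("The premise implies that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```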
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH_4BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 4.5872 |
| 4.0918 | 0.0311 | 100 | 3.9953 |
| 3.8106 | 0.0621 | 200 | 3.5150 |
| 3.6256 | 0.0932 | 300 | 3.4052 |
| 3.2791 | 0.1242 | 400 | 3.2580 |
| 3.2456 | 0.1553 | 500 | 3.0634 |
| 2.9189 | 0.1864 | 600 | 3.0874 |
| 3.0159 | 0.2174 | 700 | 2.9400 |
| 3.0506 | 0.2485 | 800 | 2.9574 |
| 3.1942 | 0.2795 | 900 | 2.8810 |
| 2.7516 | 0.3106 | 1000 | 2.8531 |
| 2.9822 | 0.3417 | 1100 | 2.8689 |
| 2.7943 | 0.3727 | 1200 | 2.8961 |
| 2.7773 | 0.4038 | 1300 | 2.7702 |
| 3.0787 | 0.4349 | 1400 | 2.7362 |
| 2.6754 | 0.4659 | 1500 | 2.7145 |
| 2.882 | 0.4970 | 1600 | 2.6246 |
| 2.8287 | 0.5280 | 1700 | 2.6403 |
| 2.8178 | 0.5591 | 1800 | 2.5918 |
| 2.8114 | 0.5902 | 1900 | 2.6481 |
| 3.0178 | 0.6212 | 2000 | 2.5809 |
| 2.7718 | 0.6523 | 2100 | 2.5701 |
| 2.785 | 0.6833 | 2200 | 2.5290 |
| 2.8581 | 0.7144 | 2300 | 2.5949 |
| 2.8815 | 0.7455 | 2400 | 2.6250 |
| 2.9384 | 0.7765 | 2500 | 2.5516 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cleatherbury/Phi-3-mini-mango-1-llamafied-Q4_K_M-GGUF | cleatherbury | "2024-05-12T03:18:36Z" | 0 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-05-12T03:18:29Z" | ---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
license_link: https://huggingface.co/rhysjones/Phi-3-mini-mango-1-llamafied/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# cleatherbury/Phi-3-mini-mango-1-llamafied-Q4_K_M-GGUF
This model was converted to GGUF format from [`rhysjones/Phi-3-mini-mango-1-llamafied`](https://huggingface.co/rhysjones/Phi-3-mini-mango-1-llamafied) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rhysjones/Phi-3-mini-mango-1-llamafied) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cleatherbury/Phi-3-mini-mango-1-llamafied-Q4_K_M-GGUF --model phi-3-mini-mango-1-llamafied.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cleatherbury/Phi-3-mini-mango-1-llamafied-Q4_K_M-GGUF --model phi-3-mini-mango-1-llamafied.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-mango-1-llamafied.Q4_K_M.gguf -n 128
```
|
Gummybear05/wav2vec2-E30_speed2 | Gummybear05 | "2024-11-20T07:55:55Z" | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-11-19T04:19:04Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E30_speed2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E30_speed2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2995
- Cer: 25.2938
## Model description
More information needed
## Intended uses & limitations
More information needed
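A minimal sketch of running inference with the `transformers` automatic-speech-recognition pipeline; the audio file path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned CTC model by its Hub id and transcribe an audio file.
asr = pipeline("automatic-speech-recognition", model="Gummybear05/wav2vec2-E30_speed2")
result = asr("path/to/audio.wav")  # placeholder path
print(result["text"])
```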
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 41.4516 | 0.1289 | 200 | 5.3911 | 100.0 |
| 4.9936 | 0.2579 | 400 | 4.7064 | 100.0 |
| 4.8068 | 0.3868 | 600 | 4.6491 | 100.0 |
| 4.7989 | 0.5158 | 800 | 4.7160 | 100.0 |
| 4.7287 | 0.6447 | 1000 | 4.5872 | 100.0 |
| 4.7102 | 0.7737 | 1200 | 4.5938 | 100.0 |
| 4.6927 | 0.9026 | 1400 | 4.6049 | 100.0 |
| 4.6106 | 1.0316 | 1600 | 4.5710 | 100.0 |
| 4.5281 | 1.1605 | 1800 | 4.4045 | 100.0 |
| 4.2276 | 1.2895 | 2000 | 3.8233 | 74.6240 |
| 3.2563 | 1.4184 | 2200 | 2.8222 | 51.8508 |
| 2.5657 | 1.5474 | 2400 | 2.3714 | 42.7850 |
| 2.2757 | 1.6763 | 2600 | 2.1794 | 41.1163 |
| 2.0428 | 1.8053 | 2800 | 1.9496 | 36.1222 |
| 1.8705 | 1.9342 | 3000 | 1.8052 | 33.6310 |
| 1.6822 | 2.0632 | 3200 | 1.6552 | 31.3514 |
| 1.574 | 2.1921 | 3400 | 1.5774 | 30.2115 |
| 1.4683 | 2.3211 | 3600 | 1.4999 | 28.9424 |
| 1.4039 | 2.4500 | 3800 | 1.4358 | 28.1786 |
| 1.3323 | 2.5790 | 4000 | 1.3441 | 26.1868 |
| 1.3055 | 2.7079 | 4200 | 1.3460 | 25.8813 |
| 1.2428 | 2.8369 | 4400 | 1.3022 | 25.5170 |
| 1.2121 | 2.9658 | 4600 | 1.2995 | 25.2938 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Razer112/Public_Models | Razer112 | "2025-03-24T04:28:39Z" | 0 | 0 | null | [
"RVC",
"vc",
"voice-cloning",
"voice-conversion",
"Voice2Voice",
"audio-to-audio",
"license:other",
"region:us"
] | audio-to-audio | "2024-08-31T02:04:40Z" | ---
license: other
license_name: rvc-models
license_link: LICENSE
pipeline_tag: audio-to-audio
tags:
- RVC
- vc
- voice-cloning
- voice-conversion
- Voice2Voice
---
# A public repo of all the voice models that I have made.
## License
These models are licensed under a custom License. Please see the LICENSE file in this repository for full terms.
|
ivt1993/writer-7b | ivt1993 | "2023-11-20T02:01:06Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:hfl/chinese-alpaca-2-7b-16k",
"base_model:adapter:hfl/chinese-alpaca-2-7b-16k",
"region:us"
] | null | "2023-11-15T13:49:48Z" | ---
library_name: peft
base_model: hfl/chinese-alpaca-2-7b-16k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
JoyboyXoXo/ppo-SnowballTarget | JoyboyXoXo | "2023-08-30T11:37:44Z" | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-08-30T11:37:42Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JoyboyXoXo/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ntc-ai/SDXL-LoRA-slider.crying | ntc-ai | "2023-12-29T01:53:22Z" | 111 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2023-12-29T01:53:19Z" |
---
language:
- en
thumbnail: "images/evaluate/crying.../crying_17_3.0.png"
widget:
- text: crying
output:
url: images/crying_17_3.0.png
- text: crying
output:
url: images/crying_19_3.0.png
- text: crying
output:
url: images/crying_20_3.0.png
- text: crying
output:
url: images/crying_21_3.0.png
- text: crying
output:
url: images/crying_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "crying"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - crying (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/crying_17_-3.0.png" width=256 height=256 /> | <img src="images/crying_17_0.0.png" width=256 height=256 /> | <img src="images/crying_17_3.0.png" width=256 height=256 /> |
| <img src="images/crying_19_-3.0.png" width=256 height=256 /> | <img src="images/crying_19_0.0.png" width=256 height=256 /> | <img src="images/crying_19_3.0.png" width=256 height=256 /> |
| <img src="images/crying_20_-3.0.png" width=256 height=256 /> | <img src="images/crying_20_0.0.png" width=256 height=256 /> | <img src="images/crying_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
crying
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.crying', weight_name='crying.safetensors', adapter_name="crying")
# Activate the LoRA
pipe.set_adapters(["crying"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, crying"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 700+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
ICT2214Team7/RoBERTa_conll_epoch_8 | ICT2214Team7 | "2024-06-28T09:02:18Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-28T08:27:29Z" | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RoBERTa_conll_epoch_8
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9463544261750539
- name: Recall
type: recall
value: 0.9589363850555369
- name: F1
type: f1
value: 0.9526038619075483
- name: Accuracy
type: accuracy
value: 0.9888772974133964
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa_conll_epoch_8
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0813
- Precision: 0.9464
- Recall: 0.9589
- F1: 0.9526
- Accuracy: 0.9889
## Model description
More information needed
## Intended uses & limitations
More information needed
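A minimal sketch of using the model for named-entity recognition with the `transformers` pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned token classifier and aggregate sub-word predictions
# into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="ICT2214Team7/RoBERTa_conll_epoch_8",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```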
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0799 | 1.0 | 1756 | 0.0700 | 0.9133 | 0.9320 | 0.9225 | 0.9827 |
| 0.0449 | 2.0 | 3512 | 0.0661 | 0.9325 | 0.9440 | 0.9382 | 0.9865 |
| 0.0283 | 3.0 | 5268 | 0.0707 | 0.9275 | 0.9456 | 0.9365 | 0.9852 |
| 0.0203 | 4.0 | 7024 | 0.0622 | 0.9424 | 0.9586 | 0.9504 | 0.9882 |
| 0.0111 | 5.0 | 8780 | 0.0758 | 0.9382 | 0.9549 | 0.9465 | 0.9878 |
| 0.0067 | 6.0 | 10536 | 0.0761 | 0.9395 | 0.9546 | 0.9470 | 0.9880 |
| 0.0031 | 7.0 | 12292 | 0.0821 | 0.9391 | 0.9546 | 0.9468 | 0.9878 |
| 0.0021 | 8.0 | 14048 | 0.0813 | 0.9464 | 0.9589 | 0.9526 | 0.9889 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
vmpsergio/c61f1b4c-2b95-4538-aa6c-95c9449fdac8 | vmpsergio | "2025-01-13T19:44:02Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-13T19:43:26Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c61f1b4c-2b95-4538-aa6c-95c9449fdac8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 44664facd5408a4c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/44664facd5408a4c_train_data.json
type:
field_input: choices
field_instruction: full_prompt
field_output: example
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: vmpsergio/c61f1b4c-2b95-4538-aa6c-95c9449fdac8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/44664facd5408a4c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: da4a2513-3912-4f9c-b444-0720fa758cb8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: da4a2513-3912-4f9c-b444-0720fa758cb8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c61f1b4c-2b95-4538-aa6c-95c9449fdac8
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2212
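Since this repository ships LoRA adapter weights rather than a full model, a minimal loading sketch (assuming the standard `peft` API; the prompt is arbitrary) looks like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
model = PeftModel.from_pretrained(base, "vmpsergio/c61f1b4c-2b95-4538-aa6c-95c9449fdac8")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```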
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0240 | 1 | 2.3996 |
| 2.029 | 0.1916 | 8 | 0.8817 |
| 0.4159 | 0.3832 | 16 | 0.3534 |
| 0.21 | 0.5749 | 24 | 0.2212 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kepinsam/ind-to-bbc-nmt-v8 | kepinsam | "2024-07-13T09:56:15Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:nusatranslation_mt",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-13T08:21:58Z" | ---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
datasets:
- nusatranslation_mt
metrics:
- sacrebleu
model-index:
- name: ind-to-bbc-nmt-v8
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: nusatranslation_mt
type: nusatranslation_mt
config: nusatranslation_mt_btk_ind_source
split: test
args: nusatranslation_mt_btk_ind_source
metrics:
- name: Sacrebleu
type: sacrebleu
value: 31.404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ind-to-bbc-nmt-v8
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the nusatranslation_mt dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1714
- Sacrebleu: 31.404
- Gen Len: 45.259
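For inference, a sketch following NLLB tokenizer conventions is shown below; `ind_Latn` is the standard NLLB code for Indonesian, while the Batak Toba target token is an assumption — check the tokenizer's vocabulary for the code actually used during fine-tuning:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "kepinsam/ind-to-bbc-nmt-v8"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="ind_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Selamat pagi, apa kabar?", return_tensors="pt")
# NOTE: "bbc_Latn" is a hypothetical target token; verify it against the
# tokenizer's vocabulary before use.
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("bbc_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```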
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 5.9718 | 1.0 | 207 | 3.5105 | 24.1454 | 44.5145 |
| 2.5392 | 2.0 | 414 | 1.6246 | 28.1486 | 45.45 |
| 1.46 | 3.0 | 621 | 1.3187 | 30.5425 | 45.375 |
| 1.2013 | 4.0 | 828 | 1.2437 | 31.2443 | 45.2075 |
| 1.0869 | 5.0 | 1035 | 1.2084 | 31.0749 | 45.3445 |
| 1.0083 | 6.0 | 1242 | 1.1851 | 31.167 | 45.35 |
| 0.9563 | 7.0 | 1449 | 1.1811 | 31.2377 | 45.344 |
| 0.9149 | 8.0 | 1656 | 1.1719 | 31.2539 | 45.343 |
| 0.8881 | 9.0 | 1863 | 1.1738 | 31.5399 | 45.145 |
| 0.872 | 10.0 | 2070 | 1.1714 | 31.404 | 45.259 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
capablebear/pokemon-lora | capablebear | "2023-11-03T11:49:40Z" | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-11-03T01:51:02Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - capablebear/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below, followed by a short usage sketch.




|
sulaimank/wav2vec-xlsr-grain-lg_cv_only | sulaimank | "2024-11-03T11:13:57Z" | 63 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-10-31T21:33:26Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec-xlsr-grain-lg_cv_only
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: lg
split: test[:10%]
args: lg
metrics:
- name: Wer
type: wer
value: 0.22608421715845847
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-xlsr-grain-lg_cv_only
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7249
- Wer: 0.2261
- Cer: 0.0663
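A minimal transcription sketch (the clip path is a placeholder; audio should be 16 kHz mono, as expected by XLS-R):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sulaimank/wav2vec-xlsr-grain-lg_cv_only",
)
print(asr("luganda_clip.wav")["text"])  # placeholder path to a 16 kHz clip
```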
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|
| 1.7903 | 1.0 | 2221 | 0.4623 | 0.4252 | 0.1113 |
| 0.6251 | 2.0 | 4442 | 0.4990 | 0.3665 | 0.1008 |
| 0.528 | 3.0 | 6663 | 0.3703 | 0.3417 | 0.0958 |
| 0.4766 | 4.0 | 8884 | 0.3617 | 0.3209 | 0.0899 |
| 0.4419 | 5.0 | 11105 | 0.3458 | 0.3002 | 0.0853 |
| 0.4112 | 6.0 | 13326 | 0.3946 | 0.3075 | 0.0856 |
| 0.3909 | 7.0 | 15547 | 0.3615 | 0.2984 | 0.0854 |
| 0.3714 | 8.0 | 17768 | 0.3903 | 0.2916 | 0.0820 |
| 0.3537 | 9.0 | 19989 | 0.4078 | 0.2935 | 0.0829 |
| 0.3375 | 10.0 | 22210 | 0.3634 | 0.2886 | 0.0849 |
| 0.3205 | 11.0 | 24431 | 0.3772 | 0.2892 | 0.0801 |
| 0.307 | 12.0 | 26652 | 0.3912 | 0.2810 | 0.0785 |
| 0.2961 | 13.0 | 28873 | 0.3527 | 0.2801 | 0.0803 |
| 0.2833 | 14.0 | 31094 | 0.3524 | 0.2892 | 0.0824 |
| 0.272 | 15.0 | 33315 | 0.3955 | 0.2834 | 0.0809 |
| 0.2608 | 16.0 | 35536 | 0.3707 | 0.2805 | 0.0799 |
| 0.2524 | 17.0 | 37757 | 0.4076 | 0.2834 | 0.0792 |
| 0.2445 | 18.0 | 39978 | 0.4205 | 0.2768 | 0.0790 |
| 0.2304 | 19.0 | 42199 | 0.4796 | 0.2809 | 0.0802 |
| 0.2266 | 20.0 | 44420 | 0.3985 | 0.2768 | 0.0799 |
| 0.2154 | 21.0 | 46641 | 0.4254 | 0.2748 | 0.0788 |
| 0.2098 | 22.0 | 48862 | 0.4124 | 0.2703 | 0.0776 |
| 0.1972 | 23.0 | 51083 | 0.3918 | 0.2728 | 0.0783 |
| 0.1925 | 24.0 | 53304 | 0.4703 | 0.2707 | 0.0783 |
| 0.1842 | 25.0 | 55525 | 0.4228 | 0.2724 | 0.0786 |
| 0.1775 | 26.0 | 57746 | 0.4272 | 0.2765 | 0.0784 |
| 0.1729 | 27.0 | 59967 | 0.4161 | 0.2729 | 0.0780 |
| 0.1656 | 28.0 | 62188 | 0.4232 | 0.2648 | 0.0777 |
| 0.1565 | 29.0 | 64409 | 0.4187 | 0.2691 | 0.0780 |
| 0.1555 | 30.0 | 66630 | 0.4280 | 0.2609 | 0.0757 |
| 0.148 | 31.0 | 68851 | 0.4350 | 0.2669 | 0.0778 |
| 0.1443 | 32.0 | 71072 | 0.4718 | 0.2676 | 0.0782 |
| 0.1407 | 33.0 | 73293 | 0.4996 | 0.2723 | 0.0768 |
| 0.1366 | 34.0 | 75514 | 0.4620 | 0.2701 | 0.0770 |
| 0.1321 | 35.0 | 77735 | 0.5067 | 0.2691 | 0.0762 |
| 0.1288 | 36.0 | 79956 | 0.4975 | 0.2613 | 0.0747 |
| 0.1273 | 37.0 | 82177 | 0.4832 | 0.2584 | 0.0744 |
| 0.1218 | 38.0 | 84398 | 0.5097 | 0.2587 | 0.0759 |
| 0.1183 | 39.0 | 86619 | 0.5145 | 0.2657 | 0.0759 |
| 0.1174 | 40.0 | 88840 | 0.5500 | 0.2599 | 0.0753 |
| 0.1142 | 41.0 | 91061 | 0.5112 | 0.2674 | 0.0761 |
| 0.1107 | 42.0 | 93282 | 0.5121 | 0.2615 | 0.0745 |
| 0.1088 | 43.0 | 95503 | 0.5215 | 0.2605 | 0.0753 |
| 0.1056 | 44.0 | 97724 | 0.4900 | 0.2548 | 0.0735 |
| 0.1046 | 45.0 | 99945 | 0.4887 | 0.2565 | 0.0729 |
| 0.1027 | 46.0 | 102166 | 0.5140 | 0.2480 | 0.0712 |
| 0.0995 | 47.0 | 104387 | 0.5110 | 0.2552 | 0.0726 |
| 0.0967 | 48.0 | 106608 | 0.5228 | 0.2562 | 0.0731 |
| 0.0938 | 49.0 | 108829 | 0.4963 | 0.2464 | 0.0702 |
| 0.0934 | 50.0 | 111050 | 0.5024 | 0.2496 | 0.0710 |
| 0.0898 | 51.0 | 113271 | 0.6114 | 0.2563 | 0.0747 |
| 0.0891 | 52.0 | 115492 | 0.5993 | 0.2575 | 0.0728 |
| 0.0864 | 53.0 | 117713 | 0.6181 | 0.2517 | 0.0721 |
| 0.0849 | 54.0 | 119934 | 0.7066 | 0.2550 | 0.0739 |
| 0.0829 | 55.0 | 122155 | 0.5745 | 0.2491 | 0.0720 |
| 0.0811 | 56.0 | 124376 | 0.5194 | 0.2438 | 0.0701 |
| 0.0804 | 57.0 | 126597 | 0.6308 | 0.2474 | 0.0716 |
| 0.0773 | 58.0 | 128818 | 0.5573 | 0.2428 | 0.0698 |
| 0.076 | 59.0 | 131039 | 0.5476 | 0.2462 | 0.0708 |
| 0.0748 | 60.0 | 133260 | 0.5976 | 0.2440 | 0.0717 |
| 0.0742 | 61.0 | 135481 | 0.6067 | 0.2448 | 0.0714 |
| 0.0725 | 62.0 | 137702 | 0.5574 | 0.2439 | 0.0702 |
| 0.0711 | 63.0 | 139923 | 0.5936 | 0.2409 | 0.0711 |
| 0.0698 | 64.0 | 142144 | 0.6039 | 0.2385 | 0.0715 |
| 0.0683 | 65.0 | 144365 | 0.5694 | 0.2417 | 0.0716 |
| 0.066 | 66.0 | 146586 | 0.6021 | 0.2415 | 0.0701 |
| 0.0653 | 67.0 | 148807 | 0.5839 | 0.2428 | 0.0702 |
| 0.0633 | 68.0 | 151028 | 0.5638 | 0.2353 | 0.0678 |
| 0.0621 | 69.0 | 153249 | 0.5731 | 0.2412 | 0.0696 |
| 0.0613 | 70.0 | 155470 | 0.6641 | 0.2430 | 0.0713 |
| 0.0606 | 71.0 | 157691 | 0.5871 | 0.2396 | 0.0693 |
| 0.0576 | 72.0 | 159912 | 0.6178 | 0.2424 | 0.0708 |
| 0.057 | 73.0 | 162133 | 0.6113 | 0.2356 | 0.0680 |
| 0.0558 | 74.0 | 164354 | 0.5890 | 0.2328 | 0.0683 |
| 0.0555 | 75.0 | 166575 | 0.6186 | 0.2427 | 0.0701 |
| 0.0542 | 76.0 | 168796 | 0.6637 | 0.2438 | 0.0709 |
| 0.0526 | 77.0 | 171017 | 0.6172 | 0.2449 | 0.0701 |
| 0.0519 | 78.0 | 173238 | 0.6267 | 0.2384 | 0.0710 |
| 0.0505 | 79.0 | 175459 | 0.6162 | 0.2366 | 0.0681 |
| 0.0494 | 80.0 | 177680 | 0.6146 | 0.2396 | 0.0688 |
| 0.0486 | 81.0 | 179901 | 0.5919 | 0.2316 | 0.0678 |
| 0.0482 | 82.0 | 182122 | 0.6668 | 0.2363 | 0.0716 |
| 0.0467 | 83.0 | 184343 | 0.6901 | 0.2288 | 0.0682 |
| 0.0457 | 84.0 | 186564 | 0.6474 | 0.2365 | 0.0688 |
| 0.0452 | 85.0 | 188785 | 0.6615 | 0.2352 | 0.0697 |
| 0.0434 | 86.0 | 191006 | 0.6998 | 0.2311 | 0.0683 |
| 0.0423 | 87.0 | 193227 | 0.6605 | 0.2279 | 0.0674 |
| 0.0423 | 88.0 | 195448 | 0.7154 | 0.2361 | 0.0709 |
| 0.0408 | 89.0 | 197669 | 0.6706 | 0.2260 | 0.0658 |
| 0.041 | 90.0 | 199890 | 0.7034 | 0.2263 | 0.0668 |
| 0.0391 | 91.0 | 202111 | 0.6943 | 0.2258 | 0.0659 |
| 0.0387 | 92.0 | 204332 | 0.6964 | 0.2259 | 0.0660 |
| 0.0378 | 93.0 | 206553 | 0.6930 | 0.2278 | 0.0661 |
| 0.0368 | 94.0 | 208774 | 0.7106 | 0.2247 | 0.0661 |
| 0.0372 | 95.0 | 210995 | 0.7001 | 0.2245 | 0.0656 |
| 0.0366 | 96.0 | 213216 | 0.7010 | 0.2247 | 0.0658 |
| 0.0357 | 97.0 | 215437 | 0.7196 | 0.2228 | 0.0661 |
| 0.0351 | 98.0 | 217658 | 0.7143 | 0.2238 | 0.0657 |
| 0.0356 | 99.0 | 219879 | 0.7230 | 0.2262 | 0.0662 |
| 0.0352 | 100.0 | 222100 | 0.7249 | 0.2261 | 0.0663 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.1.0+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
FarhadMadadzade/wav2vec2-large-xlsr-53-english-ser-cosine | FarhadMadadzade | "2024-04-03T07:55:28Z" | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"audio",
"automatic-speech-recognition",
"speech",
"speech-emotion-recognition",
"audio-classification",
"base_model:jonatasgrosman/wav2vec2-large-xlsr-53-english",
"base_model:finetune:jonatasgrosman/wav2vec2-large-xlsr-53-english",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-29T12:50:32Z" | ---
license: apache-2.0
base_model: jonatasgrosman/wav2vec2-large-xlsr-53-english
tags:
- generated_from_trainer
- audio
- automatic-speech-recognition
- speech
- speech-emotion-recognition
- audio-classification
widget:
- example_title: IEMOCAP clip "happy"
src: >-
https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro03_F013.wav
- example_title: IEMOCAP clip "neutral"
src: >-
https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro04_F000.wav
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xlsr-53-english-ser-cosine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-ser-cosine
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4677
- Accuracy: 0.8677
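Although this checkpoint is tagged for speech emotion recognition, the exact classification head is not documented here; assuming it carries a sequence-classification head, an audio-classification sketch would be:
```python
from transformers import pipeline

# Assumes a Wav2Vec2ForSequenceClassification head; if loading fails,
# inspect the repo's config.json for the actual architecture.
classifier = pipeline(
    "audio-classification",
    model="FarhadMadadzade/wav2vec2-large-xlsr-53-english-ser-cosine",
)
print(classifier("speech_clip.wav"))  # placeholder path to an audio clip
```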
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001076429938136877
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 18
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7885 | 0.01 | 10 | 1.7963 | 0.1791 |
| 1.7907 | 0.02 | 20 | 1.7973 | 0.2638 |
| 1.8288 | 0.02 | 30 | 1.7546 | 0.2465 |
| 1.7803 | 0.03 | 40 | 1.7500 | 0.2087 |
| 1.7941 | 0.04 | 50 | 1.6953 | 0.2950 |
| 1.7934 | 0.05 | 60 | 1.6342 | 0.3714 |
| 1.6559 | 0.06 | 70 | 1.6199 | 0.2892 |
| 1.6214 | 0.07 | 80 | 1.5400 | 0.4117 |
| 1.5226 | 0.07 | 90 | 1.3802 | 0.4519 |
| 1.4954 | 0.08 | 100 | 1.3506 | 0.4717 |
| 1.4062 | 0.09 | 110 | 1.3328 | 0.4766 |
| 1.4507 | 0.1 | 120 | 1.1985 | 0.5464 |
| 1.2812 | 0.11 | 130 | 1.2826 | 0.4922 |
| 1.1494 | 0.12 | 140 | 1.0960 | 0.6187 |
| 1.1035 | 0.12 | 150 | 1.1925 | 0.5645 |
| 1.2784 | 0.13 | 160 | 1.0955 | 0.6015 |
| 1.0302 | 0.14 | 170 | 1.0418 | 0.6072 |
| 1.0068 | 0.15 | 180 | 0.9261 | 0.6804 |
| 1.112 | 0.16 | 190 | 1.1529 | 0.5867 |
| 1.0308 | 0.16 | 200 | 0.8637 | 0.7058 |
| 1.0464 | 0.17 | 210 | 0.9205 | 0.6426 |
| 0.9531 | 0.18 | 220 | 0.9363 | 0.6886 |
| 1.0228 | 0.19 | 230 | 0.9637 | 0.6615 |
| 1.1446 | 0.2 | 240 | 1.3015 | 0.5489 |
| 1.1146 | 0.21 | 250 | 0.9328 | 0.6483 |
| 0.849 | 0.21 | 260 | 0.8504 | 0.6746 |
| 0.7977 | 0.22 | 270 | 0.9533 | 0.6697 |
| 0.9397 | 0.23 | 280 | 0.9300 | 0.7083 |
| 0.8625 | 0.24 | 290 | 1.1020 | 0.6401 |
| 1.333 | 0.25 | 300 | 0.9816 | 0.6442 |
| 1.0022 | 0.25 | 310 | 0.8472 | 0.7067 |
| 0.8002 | 0.26 | 320 | 0.7866 | 0.7149 |
| 0.8987 | 0.27 | 330 | 0.7979 | 0.6976 |
| 0.9473 | 0.28 | 340 | 0.8600 | 0.6878 |
| 0.9001 | 0.29 | 350 | 0.8141 | 0.7034 |
| 0.9793 | 0.3 | 360 | 0.9872 | 0.6450 |
| 0.9189 | 0.3 | 370 | 0.8561 | 0.6845 |
| 0.9281 | 0.31 | 380 | 0.9055 | 0.6919 |
| 0.7118 | 0.32 | 390 | 0.7937 | 0.6984 |
| 1.0565 | 0.33 | 400 | 0.7339 | 0.7313 |
| 0.8467 | 0.34 | 410 | 0.8262 | 0.6836 |
| 0.9601 | 0.35 | 420 | 0.7464 | 0.7346 |
| 0.8911 | 0.35 | 430 | 0.7229 | 0.7338 |
| 0.9033 | 0.36 | 440 | 0.7393 | 0.7223 |
| 0.8961 | 0.37 | 450 | 0.7272 | 0.7428 |
| 0.7216 | 0.38 | 460 | 0.7183 | 0.7436 |
| 0.6935 | 0.39 | 470 | 0.8003 | 0.7083 |
| 0.7588 | 0.39 | 480 | 0.8471 | 0.7116 |
| 0.8766 | 0.4 | 490 | 0.6976 | 0.7379 |
| 0.6866 | 0.41 | 500 | 0.6806 | 0.7584 |
| 0.6822 | 0.42 | 510 | 0.7669 | 0.7256 |
| 0.7067 | 0.43 | 520 | 0.6885 | 0.7461 |
| 0.6159 | 0.44 | 530 | 0.7020 | 0.7535 |
| 0.8814 | 0.44 | 540 | 0.7478 | 0.7256 |
| 0.7786 | 0.45 | 550 | 0.6302 | 0.7691 |
| 0.6363 | 0.46 | 560 | 0.6745 | 0.7691 |
| 0.8518 | 0.47 | 570 | 0.6242 | 0.7666 |
| 0.8194 | 0.48 | 580 | 0.7154 | 0.7379 |
| 0.6755 | 0.49 | 590 | 0.7056 | 0.7543 |
| 0.7743 | 0.49 | 600 | 0.6823 | 0.7486 |
| 0.6457 | 0.5 | 610 | 0.7160 | 0.7502 |
| 0.4976 | 0.51 | 620 | 0.8222 | 0.7149 |
| 0.929 | 0.52 | 630 | 0.7318 | 0.7371 |
| 0.7981 | 0.53 | 640 | 0.7417 | 0.7461 |
| 0.7243 | 0.53 | 650 | 0.6831 | 0.7461 |
| 0.7332 | 0.54 | 660 | 0.6273 | 0.7592 |
| 0.7827 | 0.55 | 670 | 0.6158 | 0.7724 |
| 0.7733 | 0.56 | 680 | 0.7515 | 0.7371 |
| 0.8527 | 0.57 | 690 | 0.7200 | 0.7412 |
| 0.8355 | 0.58 | 700 | 0.7738 | 0.7436 |
| 0.5383 | 0.58 | 710 | 0.9081 | 0.7132 |
| 1.0851 | 0.59 | 720 | 0.6135 | 0.7831 |
| 0.7345 | 0.6 | 730 | 0.7032 | 0.7642 |
| 0.6648 | 0.61 | 740 | 0.6146 | 0.7781 |
| 0.612 | 0.62 | 750 | 0.6338 | 0.7732 |
| 0.6101 | 0.62 | 760 | 0.6772 | 0.7740 |
| 0.6498 | 0.63 | 770 | 0.7153 | 0.7601 |
| 0.6258 | 0.64 | 780 | 0.7871 | 0.7329 |
| 0.7943 | 0.65 | 790 | 0.6975 | 0.7691 |
| 0.8176 | 0.66 | 800 | 0.7692 | 0.7313 |
| 0.6682 | 0.67 | 810 | 0.5766 | 0.8012 |
| 0.4808 | 0.67 | 820 | 0.5882 | 0.7847 |
| 0.6331 | 0.68 | 830 | 0.5855 | 0.7896 |
| 0.874 | 0.69 | 840 | 0.7082 | 0.7568 |
| 0.8984 | 0.7 | 850 | 0.6078 | 0.7732 |
| 0.5861 | 0.71 | 860 | 0.6469 | 0.7814 |
| 0.6896 | 0.72 | 870 | 0.6997 | 0.7560 |
| 0.8237 | 0.72 | 880 | 0.6279 | 0.7650 |
| 0.5818 | 0.73 | 890 | 0.6763 | 0.7691 |
| 0.4781 | 0.74 | 900 | 0.6867 | 0.7592 |
| 0.6851 | 0.75 | 910 | 0.6142 | 0.7724 |
| 0.455 | 0.76 | 920 | 0.9159 | 0.7141 |
| 0.808 | 0.76 | 930 | 0.7518 | 0.7617 |
| 1.0634 | 0.77 | 940 | 0.6015 | 0.7839 |
| 0.6956 | 0.78 | 950 | 0.5895 | 0.7872 |
| 0.5169 | 0.79 | 960 | 0.6394 | 0.7773 |
| 0.6213 | 0.8 | 970 | 0.6890 | 0.7699 |
| 0.5506 | 0.81 | 980 | 0.7471 | 0.7560 |
| 0.6233 | 0.81 | 990 | 0.6525 | 0.7872 |
| 0.7666 | 0.82 | 1000 | 0.8002 | 0.7403 |
| 0.5644 | 0.83 | 1010 | 0.7067 | 0.7387 |
| 0.6038 | 0.84 | 1020 | 0.6091 | 0.7823 |
| 0.6211 | 0.85 | 1030 | 0.6749 | 0.7707 |
| 0.6758 | 0.86 | 1040 | 0.7102 | 0.7502 |
| 0.7353 | 0.86 | 1050 | 0.6959 | 0.7560 |
| 0.5687 | 0.87 | 1060 | 0.6831 | 0.7675 |
| 0.5606 | 0.88 | 1070 | 0.5945 | 0.7847 |
| 0.7309 | 0.89 | 1080 | 0.6737 | 0.7412 |
| 0.5951 | 0.9 | 1090 | 0.6574 | 0.7675 |
| 0.6062 | 0.9 | 1100 | 0.6740 | 0.7502 |
| 0.9606 | 0.91 | 1110 | 0.5730 | 0.7839 |
| 0.6625 | 0.92 | 1120 | 0.5922 | 0.7749 |
| 0.7908 | 0.93 | 1130 | 0.5652 | 0.7823 |
| 0.6387 | 0.94 | 1140 | 0.5268 | 0.8118 |
| 0.7141 | 0.95 | 1150 | 0.5628 | 0.7896 |
| 0.5587 | 0.95 | 1160 | 0.6479 | 0.7609 |
| 0.4817 | 0.96 | 1170 | 0.5410 | 0.8044 |
| 0.4444 | 0.97 | 1180 | 0.5950 | 0.8044 |
| 0.6776 | 0.98 | 1190 | 0.5993 | 0.8012 |
| 0.5989 | 0.99 | 1200 | 0.5745 | 0.7987 |
| 0.6334 | 1.0 | 1210 | 0.6220 | 0.7913 |
| 0.5216 | 1.0 | 1220 | 0.5936 | 0.7938 |
| 0.5127 | 1.01 | 1230 | 0.6741 | 0.7839 |
| 0.5632 | 1.02 | 1240 | 0.6501 | 0.7954 |
| 0.5335 | 1.03 | 1250 | 0.5721 | 0.8061 |
| 0.511 | 1.04 | 1260 | 0.5630 | 0.8102 |
| 0.5424 | 1.04 | 1270 | 0.5396 | 0.8135 |
| 0.771 | 1.05 | 1280 | 0.5580 | 0.8012 |
| 0.435 | 1.06 | 1290 | 0.5764 | 0.8036 |
| 0.5203 | 1.07 | 1300 | 0.6032 | 0.7913 |
| 0.4689 | 1.08 | 1310 | 0.6431 | 0.7872 |
| 0.481 | 1.09 | 1320 | 0.6019 | 0.7987 |
| 0.5938 | 1.09 | 1330 | 0.6198 | 0.7938 |
| 0.3972 | 1.1 | 1340 | 0.5842 | 0.8061 |
| 0.368 | 1.11 | 1350 | 0.5066 | 0.8127 |
| 0.4644 | 1.12 | 1360 | 0.6058 | 0.8012 |
| 0.6914 | 1.13 | 1370 | 0.5384 | 0.8217 |
| 0.3341 | 1.13 | 1380 | 0.5535 | 0.8143 |
| 0.5301 | 1.14 | 1390 | 0.5916 | 0.8020 |
| 0.5294 | 1.15 | 1400 | 0.6297 | 0.7938 |
| 0.7029 | 1.16 | 1410 | 0.5581 | 0.8102 |
| 0.322 | 1.17 | 1420 | 0.6066 | 0.7831 |
| 0.6871 | 1.18 | 1430 | 0.5141 | 0.8151 |
| 0.4026 | 1.18 | 1440 | 0.6888 | 0.7716 |
| 0.4484 | 1.19 | 1450 | 0.5499 | 0.8077 |
| 0.3767 | 1.2 | 1460 | 0.4825 | 0.8225 |
| 0.4274 | 1.21 | 1470 | 0.4932 | 0.8274 |
| 0.4584 | 1.22 | 1480 | 0.5168 | 0.8299 |
| 0.5741 | 1.23 | 1490 | 0.6384 | 0.7798 |
| 0.3877 | 1.23 | 1500 | 0.5789 | 0.8044 |
| 0.3734 | 1.24 | 1510 | 0.6415 | 0.7855 |
| 0.7986 | 1.25 | 1520 | 0.5575 | 0.8077 |
| 0.5634 | 1.26 | 1530 | 0.5684 | 0.8143 |
| 0.5136 | 1.27 | 1540 | 0.5393 | 0.8143 |
| 0.5331 | 1.27 | 1550 | 0.5203 | 0.8176 |
| 0.2918 | 1.28 | 1560 | 0.5510 | 0.8151 |
| 0.4425 | 1.29 | 1570 | 0.5783 | 0.8094 |
| 0.4245 | 1.3 | 1580 | 0.5433 | 0.8209 |
| 0.3317 | 1.31 | 1590 | 0.5845 | 0.8085 |
| 0.4583 | 1.32 | 1600 | 0.6147 | 0.7954 |
| 0.3298 | 1.32 | 1610 | 0.6249 | 0.8053 |
| 0.5248 | 1.33 | 1620 | 0.5722 | 0.8094 |
| 0.665 | 1.34 | 1630 | 0.5446 | 0.8217 |
| 0.3917 | 1.35 | 1640 | 0.5316 | 0.8258 |
| 0.4321 | 1.36 | 1650 | 0.5598 | 0.8217 |
| 0.3005 | 1.37 | 1660 | 0.6190 | 0.8151 |
| 0.4992 | 1.37 | 1670 | 0.5546 | 0.8184 |
| 0.586 | 1.38 | 1680 | 0.6416 | 0.7913 |
| 0.6481 | 1.39 | 1690 | 0.5324 | 0.8135 |
| 0.4008 | 1.4 | 1700 | 0.5786 | 0.8012 |
| 0.3463 | 1.41 | 1710 | 0.5145 | 0.8209 |
| 0.4994 | 1.41 | 1720 | 0.5650 | 0.8192 |
| 0.4093 | 1.42 | 1730 | 0.5191 | 0.8365 |
| 0.6375 | 1.43 | 1740 | 0.5734 | 0.8135 |
| 0.2303 | 1.44 | 1750 | 0.5447 | 0.8102 |
| 0.4824 | 1.45 | 1760 | 0.5139 | 0.8250 |
| 0.5439 | 1.46 | 1770 | 0.4979 | 0.8258 |
| 0.4751 | 1.46 | 1780 | 0.4896 | 0.8340 |
| 0.534 | 1.47 | 1790 | 0.4656 | 0.8348 |
| 0.4526 | 1.48 | 1800 | 0.5322 | 0.8316 |
| 0.4618 | 1.49 | 1810 | 0.5216 | 0.8233 |
| 0.3825 | 1.5 | 1820 | 0.4792 | 0.8225 |
| 0.4557 | 1.5 | 1830 | 0.5071 | 0.8118 |
| 0.5725 | 1.51 | 1840 | 0.5152 | 0.8102 |
| 0.7004 | 1.52 | 1850 | 0.5080 | 0.8217 |
| 0.4367 | 1.53 | 1860 | 0.4920 | 0.8357 |
| 0.3682 | 1.54 | 1870 | 0.5253 | 0.8299 |
| 0.4411 | 1.55 | 1880 | 0.6186 | 0.8069 |
| 0.5391 | 1.55 | 1890 | 0.5074 | 0.8283 |
| 0.4673 | 1.56 | 1900 | 0.4858 | 0.8398 |
| 0.3542 | 1.57 | 1910 | 0.4767 | 0.8381 |
| 0.6483 | 1.58 | 1920 | 0.4694 | 0.8373 |
| 0.3837 | 1.59 | 1930 | 0.4678 | 0.8472 |
| 0.363 | 1.6 | 1940 | 0.4684 | 0.8463 |
| 0.6446 | 1.6 | 1950 | 0.4696 | 0.8365 |
| 0.5627 | 1.61 | 1960 | 0.4651 | 0.8472 |
| 0.3733 | 1.62 | 1970 | 0.5138 | 0.8291 |
| 0.5972 | 1.63 | 1980 | 0.5244 | 0.8250 |
| 0.2388 | 1.64 | 1990 | 0.5020 | 0.8266 |
| 0.6279 | 1.64 | 2000 | 0.5865 | 0.8118 |
| 0.5827 | 1.65 | 2010 | 0.5717 | 0.8176 |
| 0.4598 | 1.66 | 2020 | 0.4691 | 0.8439 |
| 0.3817 | 1.67 | 2030 | 0.5084 | 0.8340 |
| 0.2973 | 1.68 | 2040 | 0.4568 | 0.8447 |
| 0.4039 | 1.69 | 2050 | 0.4681 | 0.8505 |
| 0.4572 | 1.69 | 2060 | 0.4718 | 0.8389 |
| 0.3481 | 1.7 | 2070 | 0.4849 | 0.8283 |
| 0.4553 | 1.71 | 2080 | 0.4574 | 0.8414 |
| 0.4055 | 1.72 | 2090 | 0.4640 | 0.8463 |
| 0.4384 | 1.73 | 2100 | 0.5049 | 0.8431 |
| 0.5593 | 1.74 | 2110 | 0.5192 | 0.8513 |
| 0.3486 | 1.74 | 2120 | 0.4764 | 0.8480 |
| 0.4698 | 1.75 | 2130 | 0.4858 | 0.8447 |
| 0.211 | 1.76 | 2140 | 0.4976 | 0.8398 |
| 0.5209 | 1.77 | 2150 | 0.4934 | 0.8472 |
| 0.4281 | 1.78 | 2160 | 0.4714 | 0.8578 |
| 0.3902 | 1.78 | 2170 | 0.4863 | 0.8463 |
| 0.3083 | 1.79 | 2180 | 0.4807 | 0.8431 |
| 0.4642 | 1.8 | 2190 | 0.4712 | 0.8472 |
| 0.2382 | 1.81 | 2200 | 0.4641 | 0.8513 |
| 0.4154 | 1.82 | 2210 | 0.4900 | 0.8447 |
| 0.3637 | 1.83 | 2220 | 0.4790 | 0.8488 |
| 0.4864 | 1.83 | 2230 | 0.4742 | 0.8513 |
| 0.5024 | 1.84 | 2240 | 0.4803 | 0.8529 |
| 0.4139 | 1.85 | 2250 | 0.4672 | 0.8521 |
| 0.4131 | 1.86 | 2260 | 0.4895 | 0.8431 |
| 0.4851 | 1.87 | 2270 | 0.4432 | 0.8529 |
| 0.3846 | 1.88 | 2280 | 0.4417 | 0.8422 |
| 0.3778 | 1.88 | 2290 | 0.4477 | 0.8439 |
| 0.4128 | 1.89 | 2300 | 0.4336 | 0.8513 |
| 0.3755 | 1.9 | 2310 | 0.4678 | 0.8439 |
| 0.4672 | 1.91 | 2320 | 0.4740 | 0.8373 |
| 0.5216 | 1.92 | 2330 | 0.4343 | 0.8472 |
| 0.3469 | 1.92 | 2340 | 0.4542 | 0.8316 |
| 0.3283 | 1.93 | 2350 | 0.4587 | 0.8447 |
| 0.3495 | 1.94 | 2360 | 0.5050 | 0.8348 |
| 0.4518 | 1.95 | 2370 | 0.5309 | 0.8266 |
| 0.3023 | 1.96 | 2380 | 0.5113 | 0.8332 |
| 0.4014 | 1.97 | 2390 | 0.4989 | 0.8332 |
| 0.4963 | 1.97 | 2400 | 0.4539 | 0.8505 |
| 0.3421 | 1.98 | 2410 | 0.4889 | 0.8455 |
| 0.4126 | 1.99 | 2420 | 0.4696 | 0.8463 |
| 0.479 | 2.0 | 2430 | 0.4498 | 0.8513 |
| 0.3319 | 2.01 | 2440 | 0.4686 | 0.8488 |
| 0.2787 | 2.01 | 2450 | 0.4650 | 0.8447 |
| 0.2105 | 2.02 | 2460 | 0.4665 | 0.8505 |
| 0.4944 | 2.03 | 2470 | 0.4667 | 0.8488 |
| 0.2236 | 2.04 | 2480 | 0.4678 | 0.8463 |
| 0.3076 | 2.05 | 2490 | 0.4621 | 0.8513 |
| 0.2813 | 2.06 | 2500 | 0.4451 | 0.8562 |
| 0.2207 | 2.06 | 2510 | 0.4559 | 0.8562 |
| 0.3693 | 2.07 | 2520 | 0.4634 | 0.8513 |
| 0.3682 | 2.08 | 2530 | 0.4390 | 0.8562 |
| 0.2618 | 2.09 | 2540 | 0.4417 | 0.8529 |
| 0.3139 | 2.1 | 2550 | 0.4618 | 0.8529 |
| 0.1739 | 2.11 | 2560 | 0.4938 | 0.8488 |
| 0.4258 | 2.11 | 2570 | 0.4574 | 0.8496 |
| 0.2136 | 2.12 | 2580 | 0.4495 | 0.8529 |
| 0.2625 | 2.13 | 2590 | 0.4555 | 0.8570 |
| 0.3161 | 2.14 | 2600 | 0.4696 | 0.8537 |
| 0.2515 | 2.15 | 2610 | 0.4649 | 0.8661 |
| 0.3097 | 2.15 | 2620 | 0.4474 | 0.8685 |
| 0.3544 | 2.16 | 2630 | 0.4458 | 0.8603 |
| 0.2967 | 2.17 | 2640 | 0.4555 | 0.8669 |
| 0.4015 | 2.18 | 2650 | 0.4486 | 0.8652 |
| 0.079 | 2.19 | 2660 | 0.4624 | 0.8620 |
| 0.1754 | 2.2 | 2670 | 0.4805 | 0.8587 |
| 0.1854 | 2.2 | 2680 | 0.4803 | 0.8628 |
| 0.3181 | 2.21 | 2690 | 0.4792 | 0.8595 |
| 0.0808 | 2.22 | 2700 | 0.4740 | 0.8628 |
| 0.2027 | 2.23 | 2710 | 0.4846 | 0.8587 |
| 0.3211 | 2.24 | 2720 | 0.5074 | 0.8505 |
| 0.2448 | 2.25 | 2730 | 0.5276 | 0.8414 |
| 0.3618 | 2.25 | 2740 | 0.5133 | 0.8488 |
| 0.1822 | 2.26 | 2750 | 0.5002 | 0.8578 |
| 0.3095 | 2.27 | 2760 | 0.4827 | 0.8603 |
| 0.0762 | 2.28 | 2770 | 0.4792 | 0.8644 |
| 0.187 | 2.29 | 2780 | 0.4897 | 0.8644 |
| 0.5779 | 2.29 | 2790 | 0.4901 | 0.8652 |
| 0.292 | 2.3 | 2800 | 0.4764 | 0.8603 |
| 0.1865 | 2.31 | 2810 | 0.4798 | 0.8644 |
| 0.3594 | 2.32 | 2820 | 0.4837 | 0.8620 |
| 0.421 | 2.33 | 2830 | 0.4812 | 0.8562 |
| 0.1173 | 2.34 | 2840 | 0.4708 | 0.8603 |
| 0.278 | 2.34 | 2850 | 0.4693 | 0.8685 |
| 0.2294 | 2.35 | 2860 | 0.4724 | 0.8628 |
| 0.243 | 2.36 | 2870 | 0.4749 | 0.8620 |
| 0.3979 | 2.37 | 2880 | 0.4633 | 0.8628 |
| 0.4518 | 2.38 | 2890 | 0.4603 | 0.8669 |
| 0.2739 | 2.38 | 2900 | 0.4625 | 0.8685 |
| 0.1782 | 2.39 | 2910 | 0.4652 | 0.8677 |
| 0.3536 | 2.4 | 2920 | 0.4613 | 0.8644 |
| 0.0904 | 2.41 | 2930 | 0.4642 | 0.8611 |
| 0.2315 | 2.42 | 2940 | 0.4613 | 0.8661 |
| 0.1236 | 2.43 | 2950 | 0.4628 | 0.8652 |
| 0.1842 | 2.43 | 2960 | 0.4706 | 0.8620 |
| 0.2414 | 2.44 | 2970 | 0.4683 | 0.8652 |
| 0.3419 | 2.45 | 2980 | 0.4645 | 0.8677 |
| 0.2877 | 2.46 | 2990 | 0.4657 | 0.8636 |
| 0.2524 | 2.47 | 3000 | 0.4701 | 0.8652 |
| 0.1731 | 2.48 | 3010 | 0.4733 | 0.8644 |
| 0.1731 | 2.48 | 3020 | 0.4830 | 0.8595 |
| 0.0921 | 2.49 | 3030 | 0.4904 | 0.8603 |
| 0.1593 | 2.5 | 3040 | 0.4836 | 0.8595 |
| 0.467 | 2.51 | 3050 | 0.4706 | 0.8628 |
| 0.4225 | 2.52 | 3060 | 0.4598 | 0.8644 |
| 0.1251 | 2.52 | 3070 | 0.4511 | 0.8694 |
| 0.2181 | 2.53 | 3080 | 0.4487 | 0.8735 |
| 0.2247 | 2.54 | 3090 | 0.4452 | 0.8767 |
| 0.3722 | 2.55 | 3100 | 0.4469 | 0.8759 |
| 0.1069 | 2.56 | 3110 | 0.4536 | 0.8735 |
| 0.2174 | 2.57 | 3120 | 0.4571 | 0.8710 |
| 0.2586 | 2.57 | 3130 | 0.4626 | 0.8685 |
| 0.2803 | 2.58 | 3140 | 0.4665 | 0.8677 |
| 0.4484 | 2.59 | 3150 | 0.4581 | 0.8694 |
| 0.3104 | 2.6 | 3160 | 0.4539 | 0.8735 |
| 0.2411 | 2.61 | 3170 | 0.4531 | 0.8726 |
| 0.2157 | 2.62 | 3180 | 0.4565 | 0.8694 |
| 0.2342 | 2.62 | 3190 | 0.4549 | 0.8694 |
| 0.2921 | 2.63 | 3200 | 0.4570 | 0.8677 |
| 0.1988 | 2.64 | 3210 | 0.4590 | 0.8677 |
| 0.2142 | 2.65 | 3220 | 0.4601 | 0.8661 |
| 0.1666 | 2.66 | 3230 | 0.4652 | 0.8661 |
| 0.2296 | 2.66 | 3240 | 0.4709 | 0.8611 |
| 0.3847 | 2.67 | 3250 | 0.4676 | 0.8636 |
| 0.4149 | 2.68 | 3260 | 0.4654 | 0.8636 |
| 0.2602 | 2.69 | 3270 | 0.4614 | 0.8661 |
| 0.3786 | 2.7 | 3280 | 0.4605 | 0.8661 |
| 0.3509 | 2.71 | 3290 | 0.4590 | 0.8661 |
| 0.2254 | 2.71 | 3300 | 0.4564 | 0.8677 |
| 0.1775 | 2.72 | 3310 | 0.4553 | 0.8694 |
| 0.2269 | 2.73 | 3320 | 0.4546 | 0.8669 |
| 0.1792 | 2.74 | 3330 | 0.4549 | 0.8644 |
| 0.1107 | 2.75 | 3340 | 0.4580 | 0.8661 |
| 0.2062 | 2.75 | 3350 | 0.4598 | 0.8636 |
| 0.1641 | 2.76 | 3360 | 0.4621 | 0.8652 |
| 0.18 | 2.77 | 3370 | 0.4651 | 0.8652 |
| 0.0959 | 2.78 | 3380 | 0.4673 | 0.8661 |
| 0.217 | 2.79 | 3390 | 0.4672 | 0.8652 |
| 0.3293 | 2.8 | 3400 | 0.4673 | 0.8644 |
| 0.2691 | 2.8 | 3410 | 0.4669 | 0.8644 |
| 0.1945 | 2.81 | 3420 | 0.4659 | 0.8652 |
| 0.2712 | 2.82 | 3430 | 0.4660 | 0.8677 |
| 0.2287 | 2.83 | 3440 | 0.4663 | 0.8677 |
| 0.2103 | 2.84 | 3450 | 0.4661 | 0.8669 |
| 0.2713 | 2.85 | 3460 | 0.4663 | 0.8669 |
| 0.3182 | 2.85 | 3470 | 0.4665 | 0.8677 |
| 0.1698 | 2.86 | 3480 | 0.4668 | 0.8669 |
| 0.2663 | 2.87 | 3490 | 0.4669 | 0.8677 |
| 0.2091 | 2.88 | 3500 | 0.4670 | 0.8685 |
| 0.1406 | 2.89 | 3510 | 0.4677 | 0.8669 |
| 0.16 | 2.89 | 3520 | 0.4682 | 0.8661 |
| 0.1413 | 2.9 | 3530 | 0.4686 | 0.8661 |
| 0.3499 | 2.91 | 3540 | 0.4690 | 0.8661 |
| 0.205 | 2.92 | 3550 | 0.4688 | 0.8661 |
| 0.3849 | 2.93 | 3560 | 0.4684 | 0.8661 |
| 0.209 | 2.94 | 3570 | 0.4680 | 0.8669 |
| 0.1985 | 2.94 | 3580 | 0.4678 | 0.8677 |
| 0.1989 | 2.95 | 3590 | 0.4678 | 0.8677 |
| 0.2031 | 2.96 | 3600 | 0.4677 | 0.8677 |
| 0.2401 | 2.97 | 3610 | 0.4677 | 0.8677 |
| 0.2717 | 2.98 | 3620 | 0.4678 | 0.8677 |
| 0.2821 | 2.99 | 3630 | 0.4678 | 0.8677 |
| 0.1735 | 2.99 | 3640 | 0.4677 | 0.8677 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.1.dev0
- Tokenizers 0.15.2 |
abdullah-alnahas/llama-3-8b-4bit-demo | abdullah-alnahas | "2024-05-07T09:30:00Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-05-06T18:09:11Z" | ---
license: apache-2.0
---
|
ADHIZ/omni_vinnu | ADHIZ | "2024-11-12T12:55:20Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-11-12T12:54:29Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
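In the absence of documented usage, a minimal text2text sketch (the prompt format is an assumption, since the card does not specify the task):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="ADHIZ/omni_vinnu")
# The prompt below is arbitrary; adapt it to the task the model was trained for.
print(generator("Summarize: The quick brown fox jumps over the lazy dog.")[0]["generated_text"])
```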
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
togsoo/tj | togsoo | "2025-03-09T09:05:03Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2025-03-09T09:05:03Z" | ---
license: artistic-2.0
---
|
jingtingjian/test-opt-125m-c4-autogptq-3bit | jingtingjian | "2024-03-15T05:54:27Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | "2024-03-15T05:54:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
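As a starting point, a hedged loading sketch for this 3-bit GPTQ checkpoint (assumes `optimum` and a GPTQ backend such as `auto-gptq` are installed so `transformers` can pick up the quantization config automatically):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jingtingjian/test-opt-125m-c4-autogptq-3bit"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```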
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/hijiri_byakuren_touhou | LarryAIDraw | "2023-10-12T19:29:16Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-10-12T19:27:11Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/131068/hijiribyakuren-touhou |
nrslearning/gita-text-generation-gpt2 | nrslearning | "2025-01-29T12:33:40Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-29T12:32:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
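As a placeholder until the card is completed, a minimal generation sketch (the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="nrslearning/gita-text-generation-gpt2")
print(generator("You have a right to perform your duty,", max_new_tokens=50)[0]["generated_text"])
```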
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bartowski/opus-v1.2-7b-exl2 | bartowski | "2024-02-29T06:02:36Z" | 3 | 0 | null | [
"unsloth",
"axolotl",
"text-generation",
"en",
"region:us"
] | text-generation | "2024-02-29T05:48:16Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
quantized_by: bartowski
---
## Exllama v2 Quantizations of opus-v1.2-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/dreamgen/opus-v1.2-7b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/opus-v1.2-7b-exl2 opus-v1.2-7b-exl2-6_5
```
With the huggingface-hub CLI (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `opus-v1.2-7b-exl2`:
```shell
mkdir opus-v1.2-7b-exl2
huggingface-cli download bartowski/opus-v1.2-7b-exl2 --local-dir opus-v1.2-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir opus-v1.2-7b-exl2-6_5
huggingface-cli download bartowski/opus-v1.2-7b-exl2 --revision 6_5 --local-dir opus-v1.2-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir opus-v1.2-7b-exl2-6.5
huggingface-cli download bartowski/opus-v1.2-7b-exl2 --revision 6_5 --local-dir opus-v1.2-7b-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
LarryAIDraw/Urayodo_v2 | LarryAIDraw | "2023-10-26T21:59:06Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-10-26T21:55:43Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/162422/ura-yodo-oshiroprojectrere |
Hachipo/qwen2.5-0.5B_educational_instruct_top6000_codeonly | Hachipo | "2024-12-19T21:25:49Z" | 149 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-19T07:31:26Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
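As a starting point, a minimal chat-style sketch (assumes the tokenizer ships a chat template, as Qwen2.5 tokenizers typically do):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hachipo/qwen2.5-0.5B_educational_instruct_top6000_codeonly"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```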
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
neuralmagic/llama2.c-stories110M-pruned2.4 | neuralmagic | "2024-03-05T15:46:11Z" | 88 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"nm-vllm",
"sparse",
"arxiv:2301.00774",
"base_model:Xenova/llama2.c-stories110M",
"base_model:finetune:Xenova/llama2.c-stories110M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-16T18:01:44Z" | ---
base_model: Xenova/llama2.c-stories110M
inference: true
model_type: llama
quantized_by: mgoin
tags:
- nm-vllm
- sparse
---
## llama2.c-stories110M-pruned2.4
This repo contains model files for [llama2.c 110M tinystories](https://huggingface.co/Xenova/llama2.c-stories110M) optimized for [NM-vLLM](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.
This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [NM-vLLM](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory usage:
```bash
pip install nm-vllm[sparse]
```
Run in a Python pipeline for local inference:
```python
from vllm import LLM, SamplingParams
model = LLM("neuralmagic/llama2.c-stories110M-pruned2.4", sparsity="semi_structured_sparse_w16a16")
prompt = "My name is "
sampling_params = SamplingParams(max_tokens=100, temperature=0)
outputs = model.generate(prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
""""
3 years old. My name is Sam. I love to play with my toys. I love to play with my toys.
One day, my mom takes me to the park. She brings a big bag. She takes out a big bag. It is full of things.
At the park, Sam sees a big box. He sees it was made from paper. He sees it is made from paper. He sees it is made from paper.
Sam's mom takes outs
"""
```
## Prompt template
N/A
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
Install [SparseML](https://github.com/neuralmagic/sparseml):
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```
Replace the recipe as you like and run this one-shot compression script to apply SparseGPT:
```python
import sparseml.transformers
original_model_name = "Xenova/llama2.c-stories110M"
calibration_dataset = "open_platypus"
output_directory = "output/"
recipe = """
test_stage:
obcq_modifiers:
SparseGPTModifier:
sparsity: 0.5
sequential_update: true
quantize: false
mask_structure: '2:4'
targets: ['re:model.layers.\d*$']
"""
# Apply SparseGPT to the model
sparseml.transformers.oneshot(
model=original_model_name,
dataset=calibration_dataset,
recipe=recipe,
output_dir=output_directory,
)
```
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
Chenxi-Chelsea-Liu/whisper-small-noisy-hi | Chenxi-Chelsea-Liu | "2024-01-17T01:49:06Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-16T14:58:10Z" | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-noisy-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-noisy-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5460
- Wer: 74.5720
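For reference, the checkpoint can be loaded with the standard 🤗 `transformers` speech-recognition pipeline. This is a minimal sketch, assuming the default Whisper preprocessing; the audio path is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Chenxi-Chelsea-Liu/whisper-small-noisy-hi",
)

# Transcribe a (hypothetical) noisy Hindi audio file.
print(asr("noisy_hindi_sample.wav")["text"])
```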
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5752 | 0.46 | 50 | 2.2665 | 120.7418 |
| 1.6855 | 0.92 | 100 | 1.6174 | 92.1494 |
| 1.4464 | 1.38 | 150 | 1.4430 | 92.0543 |
| 1.3211 | 1.83 | 200 | 1.3179 | 88.5094 |
| 1.1732 | 2.29 | 250 | 1.2025 | 86.2182 |
| 1.0507 | 2.75 | 300 | 1.0736 | 83.7628 |
| 0.8575 | 3.21 | 350 | 0.9902 | 80.8404 |
| 0.8096 | 3.67 | 400 | 0.9516 | 80.1833 |
| 0.7257 | 4.13 | 450 | 0.9286 | 78.7740 |
| 0.6689 | 4.59 | 500 | 0.9091 | 77.0621 |
| 0.6331 | 5.05 | 550 | 0.9014 | 76.5087 |
| 0.5123 | 5.5 | 600 | 0.9030 | 74.3213 |
| 0.505 | 5.96 | 650 | 0.8833 | 76.0851 |
| 0.3716 | 6.42 | 700 | 0.9274 | 75.5144 |
| 0.3759 | 6.88 | 750 | 0.9227 | 74.1657 |
| 0.2658 | 7.34 | 800 | 0.9754 | 77.3993 |
| 0.2624 | 7.8 | 850 | 0.9800 | 74.9784 |
| 0.1755 | 8.26 | 900 | 1.0364 | 74.5807 |
| 0.1771 | 8.72 | 950 | 1.0549 | 76.0678 |
| 0.1239 | 9.17 | 1000 | 1.1081 | 74.8314 |
| 0.112 | 9.63 | 1050 | 1.1373 | 74.9524 |
| 0.0942 | 10.09 | 1100 | 1.1697 | 75.2205 |
| 0.0691 | 10.55 | 1150 | 1.2068 | 76.6384 |
| 0.0659 | 11.01 | 1200 | 1.2280 | 75.6095 |
| 0.0417 | 11.47 | 1250 | 1.2840 | 74.9697 |
| 0.0416 | 11.93 | 1300 | 1.3025 | 75.9035 |
| 0.025 | 12.39 | 1350 | 1.3342 | 76.1110 |
| 0.0258 | 12.84 | 1400 | 1.3580 | 74.9438 |
| 0.0182 | 13.3 | 1450 | 1.4077 | 75.9467 |
| 0.0154 | 13.76 | 1500 | 1.4214 | 75.1167 |
| 0.0131 | 14.22 | 1550 | 1.4525 | 74.8660 |
| 0.0119 | 14.68 | 1600 | 1.4903 | 74.7709 |
| 0.011 | 15.14 | 1650 | 1.5147 | 75.0476 |
| 0.0079 | 15.6 | 1700 | 1.5375 | 75.9727 |
| 0.0087 | 16.06 | 1750 | 1.5460 | 74.5720 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 1.12.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
shanhy/xlm-roberta-base_kin-hau-eng_train_spearman_corr | shanhy | "2024-02-17T00:11:36Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-17T00:10:55Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_kin-hau-eng_train_spearman_corr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_kin-hau-eng_train_spearman_corr
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0363
- Spearman Corr: 0.7305
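A minimal inference sketch, assuming the single-logit regression head implied by the MSE loss and Spearman metric (the sentence-pair input format is also an assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "shanhy/xlm-roberta-base_kin-hau-eng_train_spearman_corr"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score one sentence pair; the raw logit is read out as the relatedness score.
inputs = tokenizer("First sentence.", "Second sentence.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```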
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.53 | 200 | 0.0444 | 0.6614 |
| No log | 1.06 | 400 | 0.0370 | 0.6910 |
| No log | 1.6 | 600 | 0.0453 | 0.6924 |
| 0.0401 | 2.13 | 800 | 0.0353 | 0.6988 |
| 0.0401 | 2.66 | 1000 | 0.0359 | 0.7081 |
| 0.0401 | 3.19 | 1200 | 0.0368 | 0.7005 |
| 0.0401 | 3.72 | 1400 | 0.0356 | 0.7153 |
| 0.0257 | 4.26 | 1600 | 0.0320 | 0.7303 |
| 0.0257 | 4.79 | 1800 | 0.0304 | 0.7219 |
| 0.0257 | 5.32 | 2000 | 0.0431 | 0.7236 |
| 0.0257 | 5.85 | 2200 | 0.0335 | 0.7280 |
| 0.019 | 6.38 | 2400 | 0.0313 | 0.7130 |
| 0.019 | 6.91 | 2600 | 0.0309 | 0.7331 |
| 0.019 | 7.45 | 2800 | 0.0308 | 0.7338 |
| 0.019 | 7.98 | 3000 | 0.0329 | 0.7391 |
| 0.0143 | 8.51 | 3200 | 0.0340 | 0.7305 |
| 0.0143 | 9.04 | 3400 | 0.0358 | 0.7315 |
| 0.0143 | 9.57 | 3600 | 0.0347 | 0.7422 |
| 0.011 | 10.11 | 3800 | 0.0378 | 0.7414 |
| 0.011 | 10.64 | 4000 | 0.0295 | 0.7383 |
| 0.011 | 11.17 | 4200 | 0.0341 | 0.7410 |
| 0.011 | 11.7 | 4400 | 0.0404 | 0.7404 |
| 0.0086 | 12.23 | 4600 | 0.0343 | 0.7345 |
| 0.0086 | 12.77 | 4800 | 0.0333 | 0.7352 |
| 0.0086 | 13.3 | 5000 | 0.0376 | 0.7313 |
| 0.0086 | 13.83 | 5200 | 0.0363 | 0.7305 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ramyasri0809/english_hindi | ramyasri0809 | "2025-02-18T12:06:50Z" | 0 | 0 | null | [
"translation",
"english-to-hindi",
"en",
"hi",
"dataset:your-dataset-name",
"base_model:ramyasri0809/english_hindi",
"base_model:finetune:ramyasri0809/english_hindi",
"license:mit",
"region:us"
] | translation | "2025-02-18T10:32:09Z" | ---
license: mit
language:
- en
- hi
base_model:
- ramyasri0809/english_hindi
pipeline_tag: translation
tags:
- translation
- english-to-hindi
datasets:
- your-dataset-name
metrics:
- accuracy
---
# English to Hindi Translation Model
This model translates English text into Hindi. It is trained using [dataset name] and fine-tuned from [base model, if applicable].
## How to Use
```python
from transformers import pipeline
translator = pipeline("translation", model="ramyasri0809/english_hindi")
result = translator("Hello, how are you?")
print(result[0]['translation_text'])
``` |
marialvsantiago/4e22e67f-1c2f-44a3-98c4-01690d719424 | marialvsantiago | "2025-01-17T11:00:16Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | "2025-01-17T10:59:58Z" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4e22e67f-1c2f-44a3-98c4-01690d719424
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 6eb277e3a89b912e_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/6eb277e3a89b912e_train_data.json
  type:
    field_input: keywords
    field_instruction: abstract
    field_output: title
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: marialvsantiago/4e22e67f-1c2f-44a3-98c4-01690d719424
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/6eb277e3a89b912e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc52cd4b-5fb1-4bdc-90b1-84bc37e61a49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc52cd4b-5fb1-4bdc-90b1-84bc37e61a49
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4e22e67f-1c2f-44a3-98c4-01690d719424
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
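Note that the validation loss above is `nan`, so outputs from this adapter should be treated with caution. A minimal sketch for attaching the LoRA adapter to its base model with 🤗 PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "JackFram/llama-68m"
adapter = "marialvsantiago/4e22e67f-1c2f-44a3-98c4-01690d719424"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
```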
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0044 | 1 | nan |
| 0.0 | 0.0221 | 5 | nan |
| 0.0 | 0.0443 | 10 | nan |
| 0.0 | 0.0664 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
minosu/godot_dodo_4x_60k_llama_7b | minosu | "2023-04-23T19:10:00Z" | 21 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-20T22:19:40Z" | # godot_dodo_4x_60k_llama_7b
## Model details
Trained in April 2023.
Godot-Dodo models are instruction-following models finetuned from LLaMA models.
Please refer to the README of the [GitHub repository](https://github.com/minosvasilias/godot-dodo) for detailed information.
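As a quick orientation, the model loads like any other LLaMA-based causal LM. The snippet below is a sketch, not the official usage; the exact prompt template is documented in the GitHub repository, and the instruction shown is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "minosu/godot_dodo_4x_60k_llama_7b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative instruction; see the GitHub repo for the exact prompt format.
prompt = "Write a GDScript function that moves a sprite to a target position."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```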
### Evaluation datasets
The model was evaluated using code instruction prompts. More details in the [GitHub repository](https://github.com/minosvasilias/godot-dodo).
### Training dataset
The model was trained on a 60k-row instruction-following dataset, which is released in the [GitHub repository](https://github.com/minosvasilias/godot-dodo).
|
nbninh/c3c11490-b58f-41bd-a04e-4ef2fef4bea0 | nbninh | "2025-01-22T23:16:14Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T22:35:17Z" | ---
library_name: peft
license: mit
base_model: princeton-nlp/gemma-2-9b-it-SimPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3c11490-b58f-41bd-a04e-4ef2fef4bea0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 74a239d9a8cfd97c_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/74a239d9a8cfd97c_train_data.json
  type:
    field_instruction: prompt
    field_output: answer
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/c3c11490-b58f-41bd-a04e-4ef2fef4bea0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/74a239d9a8cfd97c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d983d4c6-571f-40f8-b37b-c95ef79b9703
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d983d4c6-571f-40f8-b37b-c95ef79b9703
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c3c11490-b58f-41bd-a04e-4ef2fef4bea0
This model is a fine-tuned version of [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2649
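A minimal sketch for loading the adapter, assuming the usual 🤗 PEFT workflow (merging is optional and shown only for illustration):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "princeton-nlp/gemma-2-9b-it-SimPO"
adapter = "nbninh/c3c11490-b58f-41bd-a04e-4ef2fef4bea0"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

# Optionally fold the LoRA weights into the base model for adapter-free inference.
model = model.merge_and_unload()
```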
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4712 | 0.8197 | 200 | 1.2649 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TheBloke/Chronos-70B-v2-AWQ | TheBloke | "2023-11-09T18:20:39Z" | 12 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"chat",
"roleplay",
"storywriting",
"base_model:elinas/chronos-70b-v2",
"base_model:quantized:elinas/chronos-70b-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-09-19T04:20:32Z" | ---
license: cc-by-nc-4.0
tags:
- chat
- roleplay
- storywriting
model_name: Chronos 70B v2
base_model: elinas/chronos-70b-v2
inference: false
model_creator: Elinas
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronos 70B v2 - AWQ
- Model creator: [Elinas](https://huggingface.co/elinas)
- Original model: [Chronos 70B v2](https://huggingface.co/elinas/chronos-70b-v2)
<!-- description start -->
## Description
This repo contains AWQ model files for [Elinas's Chronos 70B v2](https://huggingface.co/elinas/chronos-70b-v2).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by the continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing the use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, using AWQ enables much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chronos-70B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronos-70B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronos-70B-v2-GGUF)
* [Elinas's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-70b-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Elinas's Chronos 70B v2](https://huggingface.co/elinas/chronos-70b-v2).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Chronos-70B-v2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.61 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Chronos-70B-v2-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Chronos-70B-v2-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Chronos-70B-v2-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Elinas's Chronos 70B v2
# chronos-70b-v2
This is the FP16 PyTorch / HF version of **chronos-70b-v2**, based on the **Llama v2 Base** model. This version will **not fit on a consumer GPU**; use one of the quantized models linked below instead!
Big thank you to the Pygmalion team for providing compute. Reach out to me if you would like individual credit.
This model is primarily focused on chat, roleplay, and storywriting, with significantly improved reasoning and logic. It does not have any form of censorship; please use it responsibly.
Chronos can generate very long outputs with coherent text, largely due to the human inputs it was trained on, and it supports context length up to 4096 tokens.
## License
This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only which takes priority over the **LLAMA 2 COMMUNITY LICENSE AGREEMENT**. If you'd like to discuss using it for your business, contact Elinas through Discord **elinas**, or X (Twitter) **@officialelinas**.
The "Model" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included **cc-by-nc-4.0** license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences.
At the moment, only the released 70b models fall under this license, and the terms may change at any time (i.e. to a more permissive license allowing commercial use).
## Model Usage
This model uses Alpaca formatting, so for optimal model performance, use it when starting the dialogue or story. If you use a frontend like SillyTavern, ENABLE Alpaca instruction mode:
```
### Instruction:
Your instruction or question here.
### Response:
```
Not using the format will make the model perform significantly worse than intended.
## Tips
Sampling settings can make a significant difference for this model, so play around with them. I was also informed by a user that if you are using **KoboldCPP**, the flag `--unbantokens` may improve model performance **significantly**. I have not tested this myself, but it is something to keep in mind.
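A hypothetical KoboldCPP invocation with that flag (the GGUF filename is illustrative):

```bash
# Hypothetical invocation; the model filename is illustrative.
python koboldcpp.py --model chronos-70b-v2.Q4_K_M.gguf --unbantokens
```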
## Quantized Versions for Consumer GPU Usage
[LlamaCPP Versions provided by @TheBloke](https://huggingface.co/TheBloke/Chronos-70B-v2-GGUF)
[GPTQ Quantized Versions provided by @TheBloke](https://huggingface.co/TheBloke/Chronos-70B-v2-GPTQ)
**Support Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>
|
kws/dqn-SpaceInvadersNoFrameskip-v4 | kws | "2022-08-03T07:43:27Z" | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-08-03T07:42:45Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 603.00 +/- 194.90
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kws -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kws
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
lesso06/6a94e0b0-b85d-49c8-959f-81f897edcf07 | lesso06 | "2025-03-06T21:46:09Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf",
"base_model:adapter:NousResearch/CodeLlama-13b-hf",
"region:us"
] | null | "2025-03-06T11:39:20Z" | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a94e0b0-b85d-49c8-959f-81f897edcf07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 6a94e0b0-b85d-49c8-959f-81f897edcf07
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf](https://huggingface.co/NousResearch/CodeLlama-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000206
- train_batch_size: 4
- eval_batch_size: 4
- seed: 60
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 2.4819 |
| 16.6711 | 0.1657 | 500 | 2.0871 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
uriel-482/ladron | uriel-482 | "2025-03-05T15:41:06Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-05T15:00:33Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
cleanrl/Pong-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1 | cleanrl | "2023-01-14T11:20:28Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-14T11:20:25Z" | ---
tags:
- Pong-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v5
type: Pong-v5
metrics:
- type: mean_reward
value: 19.90 +/- 1.14
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Pong-v5**
This is a trained model of a PPO agent playing Pong-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Pong-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pong-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py
curl -OL https://huggingface.co/cleanrl/Pong-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pong-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Pong-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'async_batch_size': 16,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Pong-v5',
'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado',
'gae': True,
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1024,
'norm_adv': True,
'num_envs': 64,
'num_minibatches': 2,
'num_steps': 32,
'num_updates': 24414,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 2,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'envpool-atari'}
```
|
free21cf/Qwen2.5_1.5B_MED_Instruct | free21cf | "2025-02-26T06:30:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-26T06:28:25Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
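In the absence of author-provided code, here is a minimal sketch assuming the standard Qwen2.5 chat-template workflow (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "free21cf/Qwen2.5_1.5B_MED_Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Summarize the common symptoms of anemia."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```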
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fine-tuned/jinaai_jina-embeddings-v2-base-en-08082024-msqc-webapp | fine-tuned | "2024-08-07T16:19:54Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Travel",
"Accommodation",
"Luxury",
"Airbnb",
"Indonesia",
"custom_code",
"en",
"dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-08082024-msqc-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-08-07T16:19:38Z" | ---
license: apache-2.0
datasets:
- fine-tuned/jinaai_jina-embeddings-v2-base-en-08082024-msqc-webapp
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Travel
- Accommodation
- Luxury
- Airbnb
- Indonesia
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
travel and accommodation
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-08082024-msqc-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
giantkylin/my_awesome_eli5_clm-model | giantkylin | "2023-10-08T01:57:18Z" | 220 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-05T02:15:13Z" | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7307
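A minimal generation sketch for the fine-tuned checkpoint (the prompt is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned causal LM as a text-generation pipeline.
generator = pipeline("text-generation", model="giantkylin/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to",
                max_new_tokens=50)[0]["generated_text"])
```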
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8729 | 1.0 | 1123 | 3.7480 |
| 3.7796 | 2.0 | 2246 | 3.7334 |
| 3.7337 | 3.0 | 3369 | 3.7307 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ClarenceDan/3424ccbd-54ac-4fc5-84d8-c7ffb5d819c6 | ClarenceDan | "2025-01-15T12:42:55Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2025-01-15T12:41:13Z" | ---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3424ccbd-54ac-4fc5-84d8-c7ffb5d819c6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 590fd4cbceee3791_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/590fd4cbceee3791_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/3424ccbd-54ac-4fc5-84d8-c7ffb5d819c6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/590fd4cbceee3791_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3424ccbd-54ac-4fc5-84d8-c7ffb5d819c6
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0971 | 0.0006 | 1 | 4.7593 |
| 5.3174 | 0.0017 | 3 | 4.7457 |
| 3.7602 | 0.0034 | 6 | 4.6991 |
| 4.2457 | 0.0051 | 9 | 4.4776 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MayBashendy/ArabicNewSplits8_FineTuningAraBERT_noAug_task3_organization | MayBashendy | "2025-01-14T11:51:02Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-14T11:48:25Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_FineTuningAraBERT_noAug_task3_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_FineTuningAraBERT_noAug_task3_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5796
- Qwk: 0.3828
- Mse: 0.5796
- Rmse: 0.7613
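A minimal scoring sketch, assuming the regression-style head implied by the MSE/RMSE metrics (the input sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "MayBashendy/ArabicNewSplits8_FineTuningAraBERT_noAug_task3_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score a single Arabic text for the "organization" dimension.
inputs = tokenizer("نص عربي قصير للتقييم", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits)
```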
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.6667 | 2 | 3.7564 | 0.0 | 3.7564 | 1.9382 |
| No log | 1.3333 | 4 | 2.1125 | -0.0464 | 2.1125 | 1.4535 |
| No log | 2.0 | 6 | 0.9384 | 0.0740 | 0.9384 | 0.9687 |
| No log | 2.6667 | 8 | 0.5111 | 0.2470 | 0.5111 | 0.7149 |
| No log | 3.3333 | 10 | 0.5326 | 0.1544 | 0.5326 | 0.7298 |
| No log | 4.0 | 12 | 0.5032 | 0.1842 | 0.5032 | 0.7094 |
| No log | 4.6667 | 14 | 0.5492 | 0.1597 | 0.5492 | 0.7411 |
| No log | 5.3333 | 16 | 0.5470 | 0.0653 | 0.5470 | 0.7396 |
| No log | 6.0 | 18 | 0.5204 | 0.2405 | 0.5204 | 0.7214 |
| No log | 6.6667 | 20 | 0.6030 | 0.3109 | 0.6030 | 0.7765 |
| No log | 7.3333 | 22 | 0.5992 | 0.3416 | 0.5992 | 0.7741 |
| No log | 8.0 | 24 | 0.9672 | 0.1132 | 0.9672 | 0.9834 |
| No log | 8.6667 | 26 | 0.7700 | 0.2313 | 0.7700 | 0.8775 |
| No log | 9.3333 | 28 | 0.8126 | 0.2709 | 0.8126 | 0.9015 |
| No log | 10.0 | 30 | 0.6651 | 0.2674 | 0.6651 | 0.8156 |
| No log | 10.6667 | 32 | 0.9214 | 0.1527 | 0.9214 | 0.9599 |
| No log | 11.3333 | 34 | 0.8128 | 0.1633 | 0.8128 | 0.9016 |
| No log | 12.0 | 36 | 0.6493 | 0.2222 | 0.6493 | 0.8058 |
| No log | 12.6667 | 38 | 0.6881 | 0.2212 | 0.6881 | 0.8295 |
| No log | 13.3333 | 40 | 0.6895 | 0.2198 | 0.6895 | 0.8304 |
| No log | 14.0 | 42 | 0.7659 | 0.3136 | 0.7659 | 0.8752 |
| No log | 14.6667 | 44 | 0.7675 | 0.3136 | 0.7675 | 0.8761 |
| No log | 15.3333 | 46 | 0.7291 | 0.1964 | 0.7291 | 0.8539 |
| No log | 16.0 | 48 | 0.7371 | 0.1918 | 0.7371 | 0.8585 |
| No log | 16.6667 | 50 | 0.6185 | 0.2923 | 0.6185 | 0.7865 |
| No log | 17.3333 | 52 | 0.6598 | 0.2093 | 0.6598 | 0.8123 |
| No log | 18.0 | 54 | 0.5984 | 0.1029 | 0.5984 | 0.7736 |
| No log | 18.6667 | 56 | 0.7167 | 0.1423 | 0.7167 | 0.8466 |
| No log | 19.3333 | 58 | 0.6259 | 0.2017 | 0.6259 | 0.7911 |
| No log | 20.0 | 60 | 0.6114 | 0.3149 | 0.6114 | 0.7819 |
| No log | 20.6667 | 62 | 0.6216 | 0.2795 | 0.6216 | 0.7884 |
| No log | 21.3333 | 64 | 0.7252 | 0.1365 | 0.7252 | 0.8516 |
| No log | 22.0 | 66 | 0.8477 | 0.1620 | 0.8477 | 0.9207 |
| No log | 22.6667 | 68 | 0.6088 | 0.3655 | 0.6088 | 0.7802 |
| No log | 23.3333 | 70 | 0.6748 | 0.2151 | 0.6748 | 0.8215 |
| No log | 24.0 | 72 | 0.5741 | 0.3327 | 0.5741 | 0.7577 |
| No log | 24.6667 | 74 | 0.6138 | 0.1870 | 0.6138 | 0.7834 |
| No log | 25.3333 | 76 | 0.6094 | 0.2450 | 0.6094 | 0.7806 |
| No log | 26.0 | 78 | 0.5562 | 0.3521 | 0.5562 | 0.7458 |
| No log | 26.6667 | 80 | 0.5740 | 0.3841 | 0.5740 | 0.7576 |
| No log | 27.3333 | 82 | 0.7036 | 0.1747 | 0.7036 | 0.8388 |
| No log | 28.0 | 84 | 0.6490 | 0.2295 | 0.6490 | 0.8056 |
| No log | 28.6667 | 86 | 0.6288 | 0.1889 | 0.6288 | 0.7930 |
| No log | 29.3333 | 88 | 0.6417 | 0.1940 | 0.6417 | 0.8011 |
| No log | 30.0 | 90 | 0.6549 | 0.1904 | 0.6549 | 0.8092 |
| No log | 30.6667 | 92 | 0.5789 | 0.2051 | 0.5789 | 0.7609 |
| No log | 31.3333 | 94 | 0.5853 | 0.1934 | 0.5853 | 0.7651 |
| No log | 32.0 | 96 | 0.5992 | 0.3107 | 0.5992 | 0.7741 |
| No log | 32.6667 | 98 | 0.9428 | 0.1885 | 0.9428 | 0.9710 |
| No log | 33.3333 | 100 | 1.1201 | 0.1443 | 1.1201 | 1.0583 |
| No log | 34.0 | 102 | 0.7434 | 0.2101 | 0.7434 | 0.8622 |
| No log | 34.6667 | 104 | 0.6690 | 0.3275 | 0.6690 | 0.8179 |
| No log | 35.3333 | 106 | 0.6602 | 0.3868 | 0.6602 | 0.8125 |
| No log | 36.0 | 108 | 0.5896 | 0.4023 | 0.5896 | 0.7679 |
| No log | 36.6667 | 110 | 0.6188 | 0.2697 | 0.6188 | 0.7867 |
| No log | 37.3333 | 112 | 0.5646 | 0.4023 | 0.5646 | 0.7514 |
| No log | 38.0 | 114 | 0.5554 | 0.3543 | 0.5554 | 0.7453 |
| No log | 38.6667 | 116 | 0.6494 | 0.2928 | 0.6494 | 0.8059 |
| No log | 39.3333 | 118 | 0.5946 | 0.2872 | 0.5946 | 0.7711 |
| No log | 40.0 | 120 | 0.5958 | 0.2847 | 0.5958 | 0.7719 |
| No log | 40.6667 | 122 | 0.5830 | 0.2805 | 0.5830 | 0.7636 |
| No log | 41.3333 | 124 | 0.5896 | 0.3256 | 0.5896 | 0.7678 |
| No log | 42.0 | 126 | 0.6523 | 0.2354 | 0.6523 | 0.8076 |
| No log | 42.6667 | 128 | 0.6406 | 0.2922 | 0.6406 | 0.8004 |
| No log | 43.3333 | 130 | 0.5813 | 0.2850 | 0.5813 | 0.7624 |
| No log | 44.0 | 132 | 0.6464 | 0.1943 | 0.6464 | 0.8040 |
| No log | 44.6667 | 134 | 0.5808 | 0.3519 | 0.5808 | 0.7621 |
| No log | 45.3333 | 136 | 0.5392 | 0.2736 | 0.5392 | 0.7343 |
| No log | 46.0 | 138 | 0.5365 | 0.2250 | 0.5365 | 0.7325 |
| No log | 46.6667 | 140 | 0.5390 | 0.0553 | 0.5390 | 0.7342 |
| No log | 47.3333 | 142 | 0.5755 | 0.1019 | 0.5755 | 0.7586 |
| No log | 48.0 | 144 | 0.5715 | 0.0982 | 0.5715 | 0.7560 |
| No log | 48.6667 | 146 | 0.5471 | 0.2640 | 0.5471 | 0.7397 |
| No log | 49.3333 | 148 | 0.5900 | 0.2985 | 0.5900 | 0.7681 |
| No log | 50.0 | 150 | 0.5944 | 0.3380 | 0.5944 | 0.7710 |
| No log | 50.6667 | 152 | 0.5782 | 0.3202 | 0.5782 | 0.7604 |
| No log | 51.3333 | 154 | 0.5748 | 0.3543 | 0.5748 | 0.7582 |
| No log | 52.0 | 156 | 0.6213 | 0.1585 | 0.6213 | 0.7882 |
| No log | 52.6667 | 158 | 0.5877 | 0.2748 | 0.5877 | 0.7666 |
| No log | 53.3333 | 160 | 0.5677 | 0.2423 | 0.5677 | 0.7535 |
| No log | 54.0 | 162 | 0.5553 | 0.2564 | 0.5553 | 0.7452 |
| No log | 54.6667 | 164 | 0.5583 | 0.1520 | 0.5583 | 0.7472 |
| No log | 55.3333 | 166 | 0.6110 | 0.0793 | 0.6110 | 0.7817 |
| No log | 56.0 | 168 | 0.6696 | 0.2138 | 0.6696 | 0.8183 |
| No log | 56.6667 | 170 | 0.6429 | 0.1148 | 0.6429 | 0.8018 |
| No log | 57.3333 | 172 | 0.5756 | 0.3696 | 0.5756 | 0.7587 |
| No log | 58.0 | 174 | 0.6037 | 0.3052 | 0.6037 | 0.7770 |
| No log | 58.6667 | 176 | 0.6507 | 0.2999 | 0.6507 | 0.8067 |
| No log | 59.3333 | 178 | 0.5970 | 0.3052 | 0.5970 | 0.7727 |
| No log | 60.0 | 180 | 0.5701 | 0.2956 | 0.5701 | 0.7550 |
| No log | 60.6667 | 182 | 0.5676 | 0.2524 | 0.5676 | 0.7534 |
| No log | 61.3333 | 184 | 0.5744 | 0.3092 | 0.5744 | 0.7579 |
| No log | 62.0 | 186 | 0.5831 | 0.3007 | 0.5831 | 0.7636 |
| No log | 62.6667 | 188 | 0.5900 | 0.2540 | 0.5900 | 0.7681 |
| No log | 63.3333 | 190 | 0.5928 | 0.3007 | 0.5928 | 0.7700 |
| No log | 64.0 | 192 | 0.5925 | 0.1570 | 0.5925 | 0.7697 |
| No log | 64.6667 | 194 | 0.5855 | 0.1061 | 0.5855 | 0.7652 |
| No log | 65.3333 | 196 | 0.5739 | 0.1101 | 0.5739 | 0.7575 |
| No log | 66.0 | 198 | 0.5610 | 0.1622 | 0.5610 | 0.7490 |
| No log | 66.6667 | 200 | 0.5536 | 0.1622 | 0.5536 | 0.7440 |
| No log | 67.3333 | 202 | 0.5490 | 0.1622 | 0.5490 | 0.7409 |
| No log | 68.0 | 204 | 0.5494 | 0.2564 | 0.5494 | 0.7412 |
| No log | 68.6667 | 206 | 0.5558 | 0.2821 | 0.5558 | 0.7455 |
| No log | 69.3333 | 208 | 0.5531 | 0.2220 | 0.5531 | 0.7437 |
| No log | 70.0 | 210 | 0.5397 | 0.3415 | 0.5397 | 0.7347 |
| No log | 70.6667 | 212 | 0.5334 | 0.3915 | 0.5334 | 0.7303 |
| No log | 71.3333 | 214 | 0.5390 | 0.3521 | 0.5390 | 0.7341 |
| No log | 72.0 | 216 | 0.5409 | 0.3521 | 0.5409 | 0.7355 |
| No log | 72.6667 | 218 | 0.5429 | 0.3521 | 0.5429 | 0.7368 |
| No log | 73.3333 | 220 | 0.5443 | 0.3521 | 0.5443 | 0.7378 |
| No log | 74.0 | 222 | 0.5499 | 0.2492 | 0.5499 | 0.7415 |
| No log | 74.6667 | 224 | 0.5622 | 0.2034 | 0.5622 | 0.7498 |
| No log | 75.3333 | 226 | 0.5809 | 0.2806 | 0.5809 | 0.7622 |
| No log | 76.0 | 228 | 0.5955 | 0.2205 | 0.5955 | 0.7717 |
| No log | 76.6667 | 230 | 0.5906 | 0.2205 | 0.5906 | 0.7685 |
| No log | 77.3333 | 232 | 0.5853 | 0.2806 | 0.5853 | 0.7651 |
| No log | 78.0 | 234 | 0.5749 | 0.2978 | 0.5749 | 0.7582 |
| No log | 78.6667 | 236 | 0.5635 | 0.2927 | 0.5635 | 0.7507 |
| No log | 79.3333 | 238 | 0.5759 | 0.2540 | 0.5759 | 0.7589 |
| No log | 80.0 | 240 | 0.6189 | 0.3052 | 0.6189 | 0.7867 |
| No log | 80.6667 | 242 | 0.6669 | 0.2450 | 0.6669 | 0.8166 |
| No log | 81.3333 | 244 | 0.6621 | 0.2450 | 0.6621 | 0.8137 |
| No log | 82.0 | 246 | 0.6159 | 0.3052 | 0.6159 | 0.7848 |
| No log | 82.6667 | 248 | 0.5722 | 0.2612 | 0.5722 | 0.7564 |
| No log | 83.3333 | 250 | 0.5602 | 0.2114 | 0.5602 | 0.7485 |
| No log | 84.0 | 252 | 0.5704 | 0.2034 | 0.5704 | 0.7552 |
| No log | 84.6667 | 254 | 0.5751 | 0.2034 | 0.5751 | 0.7583 |
| No log | 85.3333 | 256 | 0.5732 | 0.2034 | 0.5732 | 0.7571 |
| No log | 86.0 | 258 | 0.5750 | 0.2993 | 0.5750 | 0.7583 |
| No log | 86.6667 | 260 | 0.5745 | 0.3816 | 0.5745 | 0.7580 |
| No log | 87.3333 | 262 | 0.5710 | 0.3816 | 0.5710 | 0.7556 |
| No log | 88.0 | 264 | 0.5743 | 0.2956 | 0.5743 | 0.7578 |
| No log | 88.6667 | 266 | 0.5887 | 0.3052 | 0.5887 | 0.7673 |
| No log | 89.3333 | 268 | 0.5982 | 0.3052 | 0.5982 | 0.7734 |
| No log | 90.0 | 270 | 0.5983 | 0.3052 | 0.5983 | 0.7735 |
| No log | 90.6667 | 272 | 0.5956 | 0.3052 | 0.5956 | 0.7718 |
| No log | 91.3333 | 274 | 0.5864 | 0.3052 | 0.5864 | 0.7658 |
| No log | 92.0 | 276 | 0.5801 | 0.3052 | 0.5801 | 0.7617 |
| No log | 92.6667 | 278 | 0.5776 | 0.3052 | 0.5776 | 0.7600 |
| No log | 93.3333 | 280 | 0.5798 | 0.3052 | 0.5798 | 0.7614 |
| No log | 94.0 | 282 | 0.5814 | 0.3052 | 0.5814 | 0.7625 |
| No log | 94.6667 | 284 | 0.5811 | 0.2971 | 0.5811 | 0.7623 |
| No log | 95.3333 | 286 | 0.5799 | 0.3442 | 0.5799 | 0.7615 |
| No log | 96.0 | 288 | 0.5803 | 0.3442 | 0.5803 | 0.7618 |
| No log | 96.6667 | 290 | 0.5803 | 0.3442 | 0.5803 | 0.7618 |
| No log | 97.3333 | 292 | 0.5803 | 0.3828 | 0.5803 | 0.7618 |
| No log | 98.0 | 294 | 0.5800 | 0.3828 | 0.5800 | 0.7616 |
| No log | 98.6667 | 296 | 0.5799 | 0.3828 | 0.5799 | 0.7615 |
| No log | 99.3333 | 298 | 0.5797 | 0.3828 | 0.5797 | 0.7614 |
| No log | 100.0 | 300 | 0.5796 | 0.3828 | 0.5796 | 0.7613 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
tensorblock/sn29_dec_05-GGUF | tensorblock | "2025-01-01T23:03:05Z" | 24 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:mci29/sn29_dec_05",
"base_model:quantized:mci29/sn29_dec_05",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-01T22:16:50Z" | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: mci29/sn29_dec_05
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mci29/sn29_dec_05 - GGUF
This repo contains GGUF format model files for [mci29/sn29_dec_05](https://huggingface.co/mci29/sn29_dec_05).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
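For example, a short sketch (assumption: standard ChatML-style assembly; prompt contents are hypothetical) of filling this template before passing it to a GGUF runtime:

```python
# Sketch: the tags mirror the prompt template above.
system_prompt = "You are a helpful assistant."  # hypothetical
prompt = "Summarize what GGUF is."              # hypothetical

text = (
    "<|im_start|>system\n"
    f"{system_prompt}<|im_end|>\n"
    "<|im_start|>user\n"
    f"{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```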
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [sn29_dec_05-Q2_K.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q2_K.gguf) | Q2_K | 3.778 GB | smallest, significant quality loss - not recommended for most purposes |
| [sn29_dec_05-Q3_K_S.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q3_K_S.gguf) | Q3_K_S | 4.335 GB | very small, high quality loss |
| [sn29_dec_05-Q3_K_M.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q3_K_M.gguf) | Q3_K_M | 4.712 GB | very small, high quality loss |
| [sn29_dec_05-Q3_K_L.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q3_K_L.gguf) | Q3_K_L | 4.929 GB | small, substantial quality loss |
| [sn29_dec_05-Q4_0.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q4_0.gguf) | Q4_0 | 5.169 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sn29_dec_05-Q4_K_S.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q4_K_S.gguf) | Q4_K_S | 5.473 GB | small, greater quality loss |
| [sn29_dec_05-Q4_K_M.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q4_K_M.gguf) | Q4_K_M | 5.875 GB | medium, balanced quality - recommended |
| [sn29_dec_05-Q5_0.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q5_0.gguf) | Q5_0 | 6.242 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sn29_dec_05-Q5_K_S.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q5_K_S.gguf) | Q5_K_S | 6.386 GB | large, low quality loss - recommended |
| [sn29_dec_05-Q5_K_M.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q5_K_M.gguf) | Q5_K_M | 6.729 GB | large, very low quality loss - recommended |
| [sn29_dec_05-Q6_K.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q6_K.gguf) | Q6_K | 7.939 GB | very large, extremely low quality loss |
| [sn29_dec_05-Q8_0.gguf](https://huggingface.co/tensorblock/sn29_dec_05-GGUF/blob/main/sn29_dec_05-Q8_0.gguf) | Q8_0 | 9.559 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/sn29_dec_05-GGUF --include "sn29_dec_05-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/sn29_dec_05-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
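The same download can also be done from Python (a sketch using the standard `huggingface_hub` API):

```python
# Sketch: downloads one quant file; the filename must match a file listed above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/sn29_dec_05-GGUF",
    filename="sn29_dec_05-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)
```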
|
denise227/dif_metric_amazon_kindle_sentiment_analysis | denise227 | "2024-04-14T15:05:50Z" | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-14T14:10:34Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dif_metric_amazon_kindle_sentiment_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dif_metric_amazon_kindle_sentiment_analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2119
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4993 | 0.02 | 10 | 1.0476 | 0.5808 |
| 0.5086 | 0.03 | 20 | 1.0657 | 0.5808 |
| 0.4702 | 0.05 | 30 | 1.0823 | 0.58 |
| 0.5238 | 0.07 | 40 | 1.1426 | 0.5608 |
| 0.4799 | 0.08 | 50 | 1.0619 | 0.5858 |
| 0.4808 | 0.1 | 60 | 1.1120 | 0.585 |
| 0.463 | 0.12 | 70 | 1.0674 | 0.5758 |
| 0.3872 | 0.13 | 80 | 1.1671 | 0.5792 |
| 0.4644 | 0.15 | 90 | 1.1096 | 0.575 |
| 0.4499 | 0.17 | 100 | 1.1601 | 0.575 |
| 0.3699 | 0.18 | 110 | 1.2459 | 0.565 |
| 0.4226 | 0.2 | 120 | 1.2242 | 0.57 |
| 0.6827 | 0.22 | 130 | 1.1437 | 0.59 |
| 0.633 | 0.23 | 140 | 1.3094 | 0.5467 |
| 0.4759 | 0.25 | 150 | 1.4381 | 0.5217 |
| 0.6485 | 0.27 | 160 | 1.1986 | 0.5542 |
| 0.4229 | 0.28 | 170 | 1.2395 | 0.5642 |
| 0.3618 | 0.3 | 180 | 1.2444 | 0.5567 |
| 0.5288 | 0.32 | 190 | 1.2414 | 0.5733 |
| 0.3946 | 0.33 | 200 | 1.2790 | 0.5675 |
| 0.3213 | 0.35 | 210 | 1.2749 | 0.57 |
| 0.4933 | 0.37 | 220 | 1.2454 | 0.5717 |
| 0.3368 | 0.38 | 230 | 1.3095 | 0.5625 |
| 0.428 | 0.4 | 240 | 1.2191 | 0.5675 |
| 0.3233 | 0.42 | 250 | 1.2650 | 0.5767 |
| 0.4256 | 0.43 | 260 | 1.2431 | 0.5875 |
| 0.505 | 0.45 | 270 | 1.2852 | 0.575 |
| 0.3405 | 0.47 | 280 | 1.2753 | 0.58 |
| 0.2997 | 0.48 | 290 | 1.2355 | 0.5833 |
| 0.3166 | 0.5 | 300 | 1.2394 | 0.5833 |
| 0.3945 | 0.52 | 310 | 1.2072 | 0.585 |
| 0.3944 | 0.53 | 320 | 1.2370 | 0.5833 |
| 0.4107 | 0.55 | 330 | 1.2283 | 0.5808 |
| 0.2498 | 0.57 | 340 | 1.2782 | 0.5775 |
| 0.3608 | 0.58 | 350 | 1.4234 | 0.565 |
| 0.4436 | 0.6 | 360 | 1.2708 | 0.5875 |
| 0.3745 | 0.62 | 370 | 1.2782 | 0.5783 |
| 0.3468 | 0.63 | 380 | 1.3724 | 0.5725 |
| 0.3561 | 0.65 | 390 | 1.2847 | 0.5883 |
| 0.3938 | 0.67 | 400 | 1.3866 | 0.5658 |
| 0.436 | 0.68 | 410 | 1.3286 | 0.5842 |
| 0.2518 | 0.7 | 420 | 1.3983 | 0.57 |
| 0.3923 | 0.72 | 430 | 1.4077 | 0.5575 |
| 0.368 | 0.73 | 440 | 1.3139 | 0.5775 |
| 0.4665 | 0.75 | 450 | 1.3430 | 0.5817 |
| 0.2811 | 0.77 | 460 | 1.3220 | 0.5858 |
| 0.2383 | 0.78 | 470 | 1.3196 | 0.5908 |
| 0.3638 | 0.8 | 480 | 1.3141 | 0.59 |
| 0.4298 | 0.82 | 490 | 1.3613 | 0.5867 |
| 0.3621 | 0.83 | 500 | 1.5250 | 0.5267 |
| 0.3613 | 0.85 | 510 | 1.3598 | 0.5842 |
| 0.3825 | 0.87 | 520 | 1.4080 | 0.5692 |
| 0.4243 | 0.88 | 530 | 1.3791 | 0.5875 |
| 0.4869 | 0.9 | 540 | 1.3808 | 0.5933 |
| 0.7625 | 0.92 | 550 | 1.3123 | 0.585 |
| 0.9794 | 0.93 | 560 | 1.2532 | 0.5925 |
| 0.6095 | 0.95 | 570 | 1.2624 | 0.5858 |
| 0.806 | 0.97 | 580 | 1.2222 | 0.5733 |
| 0.7479 | 0.98 | 590 | 1.1705 | 0.5767 |
| 0.6744 | 1.0 | 600 | 1.1430 | 0.5775 |
| 0.3205 | 1.02 | 610 | 1.1521 | 0.5808 |
| 0.4012 | 1.03 | 620 | 1.1715 | 0.6 |
| 0.4163 | 1.05 | 630 | 1.2237 | 0.5917 |
| 0.3824 | 1.07 | 640 | 1.2230 | 0.6008 |
| 0.4983 | 1.08 | 650 | 1.2200 | 0.5933 |
| 0.3678 | 1.1 | 660 | 1.2375 | 0.5908 |
| 0.4142 | 1.12 | 670 | 1.2434 | 0.5917 |
| 0.3852 | 1.13 | 680 | 1.2189 | 0.6075 |
| 0.3486 | 1.15 | 690 | 1.2383 | 0.595 |
| 0.436 | 1.17 | 700 | 1.2367 | 0.5933 |
| 0.3755 | 1.18 | 710 | 1.2292 | 0.5983 |
| 0.3124 | 1.2 | 720 | 1.2289 | 0.5983 |
| 0.4066 | 1.22 | 730 | 1.2193 | 0.5958 |
| 0.3882 | 1.23 | 740 | 1.2229 | 0.6025 |
| 0.4264 | 1.25 | 750 | 1.2066 | 0.6 |
| 0.382 | 1.27 | 760 | 1.2401 | 0.5867 |
| 0.4083 | 1.28 | 770 | 1.2310 | 0.5925 |
| 0.4244 | 1.3 | 780 | 1.2325 | 0.5942 |
| 0.3663 | 1.32 | 790 | 1.2371 | 0.59 |
| 0.3024 | 1.33 | 800 | 1.2646 | 0.5833 |
| 0.4253 | 1.35 | 810 | 1.2577 | 0.585 |
| 0.3527 | 1.37 | 820 | 1.2507 | 0.5867 |
| 0.354 | 1.38 | 830 | 1.2679 | 0.585 |
| 0.3585 | 1.4 | 840 | 1.3151 | 0.5942 |
| 0.4061 | 1.42 | 850 | 1.2708 | 0.5858 |
| 0.3498 | 1.43 | 860 | 1.2619 | 0.5892 |
| 0.4252 | 1.45 | 870 | 1.2518 | 0.5858 |
| 0.4091 | 1.47 | 880 | 1.2544 | 0.5742 |
| 0.2835 | 1.48 | 890 | 1.2472 | 0.585 |
| 0.3067 | 1.5 | 900 | 1.2494 | 0.59 |
| 0.4689 | 1.52 | 910 | 1.2516 | 0.59 |
| 0.5256 | 1.53 | 920 | 1.2403 | 0.5942 |
| 0.4434 | 1.55 | 930 | 1.2403 | 0.5933 |
| 0.4597 | 1.57 | 940 | 1.2355 | 0.5867 |
| 0.4791 | 1.58 | 950 | 1.2302 | 0.5992 |
| 0.3919 | 1.6 | 960 | 1.2243 | 0.5967 |
| 0.408 | 1.62 | 970 | 1.2207 | 0.5983 |
| 0.3767 | 1.63 | 980 | 1.2207 | 0.5942 |
| 0.4726 | 1.65 | 990 | 1.2226 | 0.5867 |
| 0.4926 | 1.67 | 1000 | 1.2121 | 0.5992 |
| 0.4534 | 1.68 | 1010 | 1.2118 | 0.5975 |
| 0.3774 | 1.7 | 1020 | 1.2113 | 0.5992 |
| 0.4451 | 1.72 | 1030 | 1.2110 | 0.6033 |
| 0.2872 | 1.73 | 1040 | 1.2124 | 0.6042 |
| 0.4425 | 1.75 | 1050 | 1.2145 | 0.5983 |
| 0.4125 | 1.77 | 1060 | 1.2175 | 0.5983 |
| 0.3929 | 1.78 | 1070 | 1.2183 | 0.6075 |
| 0.5153 | 1.8 | 1080 | 1.2177 | 0.6058 |
| 0.4643 | 1.82 | 1090 | 1.2179 | 0.6033 |
| 0.4116 | 1.83 | 1100 | 1.2192 | 0.6033 |
| 0.3701 | 1.85 | 1110 | 1.2198 | 0.6025 |
| 0.73 | 1.87 | 1120 | 1.2195 | 0.5983 |
| 0.4559 | 1.88 | 1130 | 1.2172 | 0.6 |
| 0.5021 | 1.9 | 1140 | 1.2151 | 0.5975 |
| 0.54 | 1.92 | 1150 | 1.2136 | 0.5958 |
| 0.5492 | 1.93 | 1160 | 1.2136 | 0.5967 |
| 0.524 | 1.95 | 1170 | 1.2132 | 0.6 |
| 0.4983 | 1.97 | 1180 | 1.2127 | 0.5975 |
| 0.6899 | 1.98 | 1190 | 1.2121 | 0.6 |
| 0.4692 | 2.0 | 1200 | 1.2119 | 0.6 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
liuyt75/t5-large_prefix_tuning_sentences_allagree_10 | liuyt75 | "2023-07-26T15:20:55Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-26T13:12:24Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
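The card gives no usage example; a minimal sketch (assumption: the repo stores a prefix-tuning adapter for `t5-large`, as the model name suggests) of loading it with PEFT:

```python
# Sketch: loads the adapter on top of the t5-large base model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = PeftModel.from_pretrained(
    base, "liuyt75/t5-large_prefix_tuning_sentences_allagree_10"
)
```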
|
TeTLAB/zephyr-7b-beta_assistant_v1_merged | TeTLAB | "2024-06-14T19:15:59Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T18:32:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
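The card leaves this section as a placeholder; below is a generic sketch (an assumption, not the authors' code) for a merged Mistral-architecture chat model:

```python
# Sketch: standard transformers chat usage; prompt content is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeTLAB/zephyr-7b-beta_assistant_v1_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello, who are you?"}]  # hypothetical
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```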
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
goodasdgood/Mistral-Nemo-Base-2407-Q4_K_M-GGUF | goodasdgood | "2024-08-20T02:31:28Z" | 22 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-08-20T02:30:55Z" | ---
base_model: mistralai/Mistral-Nemo-Base-2407
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# goodasdgood/Mistral-Nemo-Base-2407-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-Nemo-Base-2407`](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo goodasdgood/Mistral-Nemo-Base-2407-Q4_K_M-GGUF --hf-file mistral-nemo-base-2407-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo goodasdgood/Mistral-Nemo-Base-2407-Q4_K_M-GGUF --hf-file mistral-nemo-base-2407-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo goodasdgood/Mistral-Nemo-Base-2407-Q4_K_M-GGUF --hf-file mistral-nemo-base-2407-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo goodasdgood/Mistral-Nemo-Base-2407-Q4_K_M-GGUF --hf-file mistral-nemo-base-2407-q4_k_m.gguf -c 2048
```
|
LoneStriker/Noromaid-v0.1-mixtral-8x7b-v3-5.0bpw-h6-exl2 | LoneStriker | "2023-12-24T21:33:28Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mixtral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-24T20:42:36Z" | ---
license: cc-by-nc-4.0
---

---
# Disclaimer:
## This model is experimental, do not expect everything to work.
You need to use our custom **prompting format** (scroll down to see it, or just directly download the SillyTavern preset [here](https://files.catbox.moe/0ohmco.json))
---
Beeg noromaid. Suitable for RP, ERP.
This model was trained for 8h (v1) + 8h (v2) + 12h (v3) on customized, modified datasets focusing on RP and uncensoring, with a modified version of the Alpaca prompt format (already used in LimaRP), which should be at the same conversational level as ChatLM or Llama2-Chat without adding any additional special tokens.
If you want more info about this model (and v1 + v2), you can check out [my blog post](https://ikaridevgit.github.io/index.html?p=7&blog=blogid-6&bo=true)
[Recommended settings - No settings yet(Please suggest some over in the Community tab!)]
## Credits:
- Undi
- IkariDev
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of Noromaid-v0.1-mixtral-8x7b-v3.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
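A tiny helper (a sketch; the exact whitespace between sections is an assumption) that assembles this format:

```python
# Sketch: builds the custom Alpaca-style prompt shown above.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"{user_input}\n"
        "### Response:\n"
    )
```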
## Datasets used:
- Aesir 1 and 2 ([MinervaAI](https://huggingface.co/MinervaAI) / [Gryphe](https://huggingface.co/Gryphe))
- [LimaRP-20231109](https://huggingface.co/datasets/lemonilia/LimaRP) ([Lemonilia](https://huggingface.co/lemonilia))
- [ToxicDPO-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt) ([unalignment orga repo](https://huggingface.co/unalignment) + [Undi](https://huggingface.co/Undi95))
- [No-robots-ShareGPT](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) ([Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun))
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
PrunaAI/rinna-youri-7b-chat-QUANTO-int2bit-smashed | PrunaAI | "2024-08-02T16:02:32Z" | 3 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:rinna/youri-7b-chat",
"base_model:finetune:rinna/youri-7b-chat",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T21:01:01Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: rinna/youri-7b-chat
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto (see the sketch after this list).
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo rinna/youri-7b-chat are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/rinna-youri-7b-chat-QUANTO-int2bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model rinna/youri-7b-chat before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
AntonioTH/Layout-finetuned-fr-model-50instances20-100epochs-5e-05lr-GPU | AntonioTH | "2025-01-20T11:15:45Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"base_model:microsoft/layoutxlm-base",
"base_model:finetune:microsoft/layoutxlm-base",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | document-question-answering | "2025-01-20T11:00:08Z" | ---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutxlm-base
tags:
- generated_from_trainer
model-index:
- name: Layout-finetuned-fr-model-50instances20-100epochs-5e-05lr-GPU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Layout-finetuned-fr-model-50instances20-100epochs-5e-05lr-GPU
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0000
- eval_accuracy: 1.0
- eval_learning_rate: 5e-05
- eval_runtime: 2.1747
- eval_samples_per_second: 9.197
- eval_steps_per_second: 1.38
- epoch: 47.6923
- step: 620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 100
### Framework versions
- Transformers 4.48.0
- Pytorch 2.3.1.post300
- Datasets 3.2.0
- Tokenizers 0.21.0
|
great0001/ca813d5a-e762-43eb-8edc-ae0af9f36a6a | great0001 | "2025-01-17T09:59:15Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | "2025-01-17T09:58:00Z" | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ca813d5a-e762-43eb-8edc-ae0af9f36a6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 26e10714252c0f72_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/26e10714252c0f72_train_data.json
type:
field_instruction: fr
field_output: ar
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/ca813d5a-e762-43eb-8edc-ae0af9f36a6a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/26e10714252c0f72_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9b79bb1e-e20b-41f0-bdfd-5c5e821b156b
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9b79bb1e-e20b-41f0-bdfd-5c5e821b156b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ca813d5a-e762-43eb-8edc-ae0af9f36a6a
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
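The adapter settings from the axolotl config above translate to roughly this PEFT `LoraConfig` (a sketch, not the exact training code):

```python
# Sketch: mirrors lora_r=8, lora_alpha=16, lora_dropout=0.05, lora_target_linear=true.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",  # axolotl's lora_target_linear: true
    task_type="CAUSAL_LM",
)
```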
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9217 | 0.0014 | 1 | 1.9107 |
| 2.0067 | 0.0042 | 3 | 1.9047 |
| 2.0335 | 0.0083 | 6 | 1.8162 |
| 1.4651 | 0.0125 | 9 | 1.5161 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/meditron-7b-llm-radiology-i1-GGUF | mradermacher | "2025-01-26T09:51:21Z" | 267 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nitinaggarwal12/meditron-7b-llm-radiology",
"base_model:quantized:nitinaggarwal12/meditron-7b-llm-radiology",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-12-26T10:10:33Z" | ---
base_model: nitinaggarwal12/meditron-7b-llm-radiology
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nitinaggarwal12/meditron-7b-llm-radiology
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/meditron-7b-llm-radiology-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/meditron-7b-llm-radiology-i1-GGUF/resolve/main/meditron-7b-llm-radiology.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|