Dataset columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-16 00:39:17 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 427 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-16 00:38:50 |
| card | string | length 11 to 1.01M |

Each row below gives, in order: modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card.
safecantonese/whisper-small-yue-full-1 | safecantonese | "2024-02-09T10:40:24Z" | 63 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:safecantonese/whisper-small-yue-full",
"base_model:finetune:safecantonese/whisper-small-yue-full",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-02-09T10:38:47Z" | ---
tags:
- generated_from_trainer
base_model: safecantonese/whisper-small-yue-full
model-index:
- name: whisper-small-yue-full-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-yue-full-1
This model is a fine-tuned version of [safecantonese/whisper-small-yue-full](https://huggingface.co/safecantonese/whisper-small-yue-full) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
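Since the card leaves usage unspecified, here is a minimal inference sketch (assumptions: the standard 🤗 Transformers ASR pipeline applies to this Whisper checkpoint, and `sample.wav` is a placeholder path):
```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="safecantonese/whisper-small-yue-full-1")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```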
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LoicSteve/rl_course_vizdoom_health_gathering_supreme | LoicSteve | "2024-01-08T14:33:21Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-08T14:32:50Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.85 +/- 5.24
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r LoicSteve/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it concluded at.
|
Larxel/a2c-AntBulletEnv-v0 | Larxel | "2023-04-16T08:29:00Z" | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-16T08:27:50Z" | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1416.13 +/- 523.94
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's "Files and versions" tab):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo; the filename is an assumption.
checkpoint = load_from_hub("Larxel/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Seif/Reinforce-Reinforce-CartPole-v1 | Seif | "2023-03-30T19:05:20Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-30T19:05:11Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
abhraskygod/cnn_news_summary_model | abhraskygod | "2023-04-08T12:55:47Z" | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-06T12:02:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: cnn_news_summary_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_news_summary_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0183
- Rouge1: 0.2163
- Rouge2: 0.083
- Rougel: 0.1761
- Rougelsum: 0.1761
- Gen Len: 18.9443
## Model description
More information needed
## Intended uses & limitations
More information needed
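As a stopgap while this section is incomplete, a minimal summarization sketch (assuming the standard 🤗 Transformers pipeline; the article text is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: summarize a news article with this checkpoint.
summarizer = pipeline("summarization", model="abhraskygod/cnn_news_summary_model")
article = "(placeholder) Paste a CNN news article here."
print(summarizer(article, max_length=60)[0]["summary_text"])
```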
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.2274 | 1.0 | 1500 | 2.0314 | 0.2154 | 0.0823 | 0.1747 | 0.1747 | 18.9512 |
| 2.2057 | 2.0 | 3000 | 2.0183 | 0.2163 | 0.083 | 0.1761 | 0.1761 | 18.9443 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mergekit-community/mergekit-slerp-hayztti | mergekit-community | "2024-12-04T01:45:58Z" | 13 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:DavidAU/L3.1-RP-Hero-Dirty_Harry-8B",
"base_model:merge:DavidAU/L3.1-RP-Hero-Dirty_Harry-8B",
"base_model:ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B",
"base_model:merge:ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-04T01:40:02Z" | ---
base_model:
- ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B
- DavidAU/L3.1-RP-Hero-Dirty_Harry-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
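For intuition, the core of SLERP (spherical linear interpolation between corresponding weight tensors, with `t` as the blend factor listed in the configuration below) can be sketched as follows; this is an illustration, not mergekit's exact implementation:
```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.sin(omega) < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```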
### Models Merged
The following models were included in the merge:
* [ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B](https://huggingface.co/ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B)
* [DavidAU/L3.1-RP-Hero-Dirty_Harry-8B](https://huggingface.co/DavidAU/L3.1-RP-Hero-Dirty_Harry-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: DavidAU/L3.1-RP-Hero-Dirty_Harry-8B
layer_range: [0, 32]
- model: ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B
layer_range: [0, 32]
merge_method: slerp
base_model: ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
dmargutierrez/distilbert-base-multilingual-cased-WNUT-ner | dmargutierrez | "2023-03-15T10:16:23Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-03-15T10:09:19Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-WNUT-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5496503496503496
- name: Recall
type: recall
value: 0.36422613531047265
- name: F1
type: f1
value: 0.4381270903010034
- name: Accuracy
type: accuracy
value: 0.9468667179618706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-WNUT-ner
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3516
- Precision: 0.5497
- Recall: 0.3642
- F1: 0.4381
- Accuracy: 0.9469
## Model description
More information needed
## Intended uses & limitations
More information needed
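Pending proper documentation, a minimal tagging sketch (assuming the standard 🤗 Transformers token-classification pipeline; the sentence is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: extract entities with this checkpoint, merging subword tokens.
ner = pipeline("token-classification",
               model="dmargutierrez/distilbert-base-multilingual-cased-WNUT-ner",
               aggregation_strategy="simple")
print(ner("Empire State Building is in New York."))  # placeholder sentence
```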
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2727 | 0.6626 | 0.2530 | 0.3662 | 0.9402 |
| No log | 2.0 | 426 | 0.2636 | 0.5895 | 0.2715 | 0.3718 | 0.9429 |
| 0.1729 | 3.0 | 639 | 0.2933 | 0.5931 | 0.3040 | 0.4020 | 0.9447 |
| 0.1729 | 4.0 | 852 | 0.2861 | 0.5437 | 0.3457 | 0.4227 | 0.9453 |
| 0.0503 | 5.0 | 1065 | 0.3270 | 0.5627 | 0.3494 | 0.4311 | 0.9455 |
| 0.0503 | 6.0 | 1278 | 0.3277 | 0.5451 | 0.3531 | 0.4286 | 0.9463 |
| 0.0503 | 7.0 | 1491 | 0.3471 | 0.5828 | 0.3457 | 0.4340 | 0.9467 |
| 0.0231 | 8.0 | 1704 | 0.3594 | 0.5801 | 0.3457 | 0.4332 | 0.9464 |
| 0.0231 | 9.0 | 1917 | 0.3550 | 0.5567 | 0.3503 | 0.4300 | 0.9467 |
| 0.0121 | 10.0 | 2130 | 0.3516 | 0.5497 | 0.3642 | 0.4381 | 0.9469 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
niryuu/Karasu-1.1b-chat-vector | niryuu | "2024-03-24T20:02:05Z" | 141 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ja",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-24T18:23:49Z" | ---
language:
- ja
license: apache-2.0
library_name: transformers
widget:
- example_title: 日本語チャット
messages:
- role: system
content: あなたは日本語のチャットボットです。
- role: user
content: 日本で一番高い山は?
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
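A minimal sketch mirroring the widget example in the metadata above (assumptions: the tokenizer ships a chat template, and the generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("niryuu/Karasu-1.1b-chat-vector")
model = AutoModelForCausalLM.from_pretrained("niryuu/Karasu-1.1b-chat-vector")

messages = [
    {"role": "system", "content": "あなたは日本語のチャットボットです。"},  # "You are a Japanese chatbot."
    {"role": "user", "content": "日本で一番高い山は?"},  # "What is the tallest mountain in Japan?"
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```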
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrferr3t/9752cb85-6a0f-4384-9e08-7e395f2c00c3 | mrferr3t | "2025-02-03T15:40:48Z" | 17 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | "2025-02-03T14:57:54Z" | ---
library_name: peft
license: other
base_model: facebook/opt-1.3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9752cb85-6a0f-4384-9e08-7e395f2c00c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: facebook/opt-1.3b
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 20fc9edc61053699_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/20fc9edc61053699_train_data.json
type:
field_input: answer
field_instruction: problem
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/9752cb85-6a0f-4384-9e08-7e395f2c00c3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/20fc9edc61053699_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: efadbf9b-21a1-4759-b077-7318afa3023b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: efadbf9b-21a1-4759-b077-7318afa3023b
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9752cb85-6a0f-4384-9e08-7e395f2c00c3
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3769
## Model description
More information needed
## Intended uses & limitations
More information needed
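Until this section is filled in, a minimal loading sketch (assuming the 🤗 PEFT auto class, which pulls the `facebook/opt-1.3b` base model underneath):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Minimal sketch: load the LoRA adapter on top of its facebook/opt-1.3b base.
model = AutoPeftModelForCausalLM.from_pretrained("mrferr3t/9752cb85-6a0f-4384-9e08-7e395f2c00c3")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
```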
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 31
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0080 | 1 | 1.7673 |
| No log | 0.3187 | 40 | 1.6043 |
| No log | 0.6375 | 80 | 1.5327 |
| 3.2674 | 0.9562 | 120 | 1.4930 |
| 3.2674 | 1.2749 | 160 | 1.4741 |
| 3.0074 | 1.5936 | 200 | 1.4569 |
| 3.0074 | 1.9124 | 240 | 1.4434 |
| 3.0074 | 2.2311 | 280 | 1.4310 |
| 2.8164 | 2.5498 | 320 | 1.4232 |
| 2.8164 | 2.8685 | 360 | 1.4143 |
| 2.7232 | 3.1873 | 400 | 1.4068 |
| 2.7232 | 3.5060 | 440 | 1.4026 |
| 2.7232 | 3.8247 | 480 | 1.3945 |
| 2.6641 | 4.1434 | 520 | 1.3931 |
| 2.6641 | 4.4622 | 560 | 1.3937 |
| 2.5637 | 4.7809 | 600 | 1.3833 |
| 2.5637 | 5.0996 | 640 | 1.3867 |
| 2.5637 | 5.4183 | 680 | 1.3838 |
| 2.4995 | 5.7371 | 720 | 1.3809 |
| 2.4995 | 6.0558 | 760 | 1.3788 |
| 2.4638 | 6.3745 | 800 | 1.3829 |
| 2.4638 | 6.6932 | 840 | 1.3788 |
| 2.4638 | 7.0120 | 880 | 1.3762 |
| 2.4062 | 7.3307 | 920 | 1.3788 |
| 2.4062 | 7.6494 | 960 | 1.3788 |
| 2.3963 | 7.9681 | 1000 | 1.3769 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Epos-8b-Q6_K-GGUF | Triangle104 | "2024-12-01T13:11:24Z" | 9 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:P0x0/Epos-8b",
"base_model:quantized:P0x0/Epos-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-01T13:07:39Z" | ---
library_name: transformers
base_model: P0x0/Epos-8b
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Epos-8b-Q6_K-GGUF
This model was converted to GGUF format from [`P0x0/Epos-8b`](https://huggingface.co/P0x0/Epos-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/P0x0/Epos-8b) for more details on the model.
---

## Model details

Epos-8B is a fine-tuned version of the base model Llama-3.1-8B from Meta, optimized for storytelling, dialogue generation, and creative writing. The model specializes in generating rich narratives, immersive prose, and dynamic character interactions, making it ideal for creative tasks.

### Model Description

Epos-8B is an 8 billion parameter language model fine-tuned for storytelling and narrative tasks. Inspired by the grandeur of epic tales, it is designed to produce high-quality, engaging content that evokes the depth and imagination of ancient myths and modern storytelling traditions.

- **Developed by:** P0x0
- **Funded by:** P0x0
- **Shared by:** P0x0
- **Model type:** Transformer-based Language Model
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B

### Model Sources

- Repository: Epos-8B on Hugging Face
- GGUF Repository: Epos-8B-GGUF (TO BE ADDED)

## Uses

### Direct Use

Epos-8B is ideal for:

- Storytelling: Generate detailed, immersive, and engaging narratives.
- Dialogue Creation: Create realistic and dynamic character interactions for stories or games.

## How to Get Started with the Model

To run the quantized version of the model, you can use KoboldCPP, which allows you to run quantized GGUF models locally.

Steps:

1. Download KoboldCPP.
2. Follow the setup instructions provided in the repository.
3. Download the GGUF variant of Epos-8B from Epos-8B-GGUF.
4. Load the model in KoboldCPP and start generating!

Alternatively, integrate the model directly into your code with the following snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("P0x0/Epos-8B")
model = AutoModelForCausalLM.from_pretrained("P0x0/Epos-8B")

input_text = "Once upon a time in a distant land..."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Epos-8b-Q6_K-GGUF --hf-file epos-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Epos-8b-Q6_K-GGUF --hf-file epos-8b-q6_k.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Epos-8b-Q6_K-GGUF --hf-file epos-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Epos-8b-Q6_K-GGUF --hf-file epos-8b-q6_k.gguf -c 2048
```
|
HZDR-FWGEL/UCD-LEVIRCD256-ChangeFormer | HZDR-FWGEL | "2024-11-11T14:30:27Z" | 6 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2024-11-11T14:30:18Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
dimasik87/939424bd-7910-430a-8678-25a1f3724646 | dimasik87 | "2025-02-06T05:16:30Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | "2025-02-06T05:10:40Z" | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 939424bd-7910-430a-8678-25a1f3724646
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 40002f1e28f17b60_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/40002f1e28f17b60_train_data.json
type:
field_input: ''
field_instruction: dialogue
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: null
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik87/939424bd-7910-430a-8678-25a1f3724646
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/G.O.D/40002f1e28f17b60_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2c9e20dc-494e-4220-b57a-6190a540efd4
wandb_project: cold4
wandb_run: your_name
wandb_runid: 2c9e20dc-494e-4220-b57a-6190a540efd4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 939424bd-7910-430a-8678-25a1f3724646
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.997 | 0.0006 | 1 | 2.3363 |
| 2.3546 | 0.0295 | 50 | 1.2250 |
| 2.7714 | 0.0590 | 100 | 1.1918 |
| 2.4258 | 0.0886 | 150 | 1.1013 |
| 1.7774 | 0.1181 | 200 | 1.0789 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kfujie/kanji-diffusion | kfujie | "2025-02-20T14:20:19Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-02-20T13:19:15Z" | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - kfujie/kanji-diffusion
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the None dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
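Until the snippet above is filled in, a minimal sketch (assumptions: the standard 🤗 Diffusers LoRA-loading API applies, and the prompt is a placeholder since the trigger wording is not documented here):
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the base model, attach this LoRA, and sample one image.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kfujie/kanji-diffusion")
image = pipe("a kanji character").images[0]  # placeholder prompt
image.save("kanji.png")
```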
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B-Q4_K_M-GGUF | sm54 | "2025-03-12T11:48:27Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B",
"base_model:quantized:sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-12T11:46:49Z" | ---
base_model: sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B`](https://huggingface.co/sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B-Q4_K_M-GGUF --hf-file qwq-deepseek-r1-skyt1-flash-lighter-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B-Q4_K_M-GGUF --hf-file qwq-deepseek-r1-skyt1-flash-lighter-32b-q4_k_m.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B-Q4_K_M-GGUF --hf-file qwq-deepseek-r1-skyt1-flash-lighter-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sm54/QwQ-DeepSeek-R1-SkyT1-Flash-Lighter-32B-Q4_K_M-GGUF --hf-file qwq-deepseek-r1-skyt1-flash-lighter-32b-q4_k_m.gguf -c 2048
```
|
IronOne-AI-Labs/led-large-annual-report-QLoRA-fine-tuned-v0.9.5-openai-merged | IronOne-AI-Labs | "2024-09-10T06:13:18Z" | 93 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-09-10T06:11:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
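In the absence of an official snippet, a minimal loading sketch (assumption: LED loads through the standard seq2seq auto classes, consistent with the repo's text2text-generation tag):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Minimal sketch: load this LED checkpoint for text2text generation.
repo = "IronOne-AI-Labs/led-large-annual-report-QLoRA-fine-tuned-v0.9.5-openai-merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
```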
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deadrichard/my-fine-tuned-model | deadrichard | "2025-01-06T12:46:36Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2025-01-06T12:46:34Z" | ---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my-fine-tuned-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-fine-tuned-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 |
sklearn-docs/anomaly-detection | sklearn-docs | "2023-04-05T12:03:13Z" | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"region:us"
] | tabular-classification | "2023-04-05T12:01:47Z" | ---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: pickle
model_file: isolation_forest.pkl
widget:
structuredData:
x0:
- 1.9137876638235471
- -1.8264435506813366
- -2.1884262678924737
x1:
- 2.021017965584703
- -1.895103662902048
- -2.1443081355382363
---
# Model description
[More Information Needed]
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|----------------------|---------------------------------------------------------------------------------------------|
| memory | |
| steps | [('scaler', StandardScaler()), ('model', IsolationForest(max_samples=100, random_state=0))] |
| verbose | False |
| scaler | StandardScaler() |
| model | IsolationForest(max_samples=100, random_state=0) |
| scaler__copy | True |
| scaler__with_mean | True |
| scaler__with_std | True |
| model__bootstrap | False |
| model__contamination | auto |
| model__max_features | 1.0 |
| model__max_samples | 100 |
| model__n_estimators | 100 |
| model__n_jobs | |
| model__random_state | 0 |
| model__verbose | 0 |
| model__warm_start | False |
</details>
### Model Plot
The fitted pipeline:

    Pipeline(steps=[('scaler', StandardScaler()),
                    ('model', IsolationForest(max_samples=100, random_state=0))])
## Evaluation Results
[More Information Needed]
# How to Get Started with the Model
[More Information Needed]
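A minimal loading sketch (assumptions: the pickled pipeline in `isolation_forest.pkl`, named in the metadata above, deserializes with the standard `pickle` module; the sample points echo the widget data):
```python
import pickle
from huggingface_hub import hf_hub_download

# Minimal sketch: fetch the pickled pipeline and score two sample points.
path = hf_hub_download("sklearn-docs/anomaly-detection", "isolation_forest.pkl")
with open(path, "rb") as f:
    pipe = pickle.load(f)
print(pipe.predict([[1.91, 2.02], [-2.19, -2.14]]))  # IsolationForest: 1 = inlier, -1 = outlier
```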
# Model Card Authors
This model card is written by following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# Decision Boundary

# Plot paths

|
Mihail-P/ppo-LunarLander-v2_final | Mihail-P | "2023-03-30T04:24:54Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-30T04:24:29Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.01 +/- 15.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's "Files and versions" tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo; the filename is an assumption.
checkpoint = load_from_hub("Mihail-P/ppo-LunarLander-v2_final", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LarryAIDraw/eimi__blue_archive_ | LarryAIDraw | "2023-12-12T17:20:18Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-12T17:18:08Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/230270/yuan-izumimoto-eimi-blue-archive |
igastesi/model_ohwx_filtered | igastesi | "2023-05-16T13:42:39Z" | 7 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:SG161222/Realistic_Vision_V2.0",
"base_model:adapter:SG161222/Realistic_Vision_V2.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-05-16T06:46:42Z" |
---
license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V2.0
instance_prompt: a photo of ohwx open window
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - igastesi/model_ohwx_filtered
These are LoRA adaptation weights for SG161222/Realistic_Vision_V2.0. The weights were trained on a photo of ohwx open window using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
liuchang8877/qwen2.5omini | liuchang8877 | "2025-03-31T08:33:00Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-31T08:33:00Z" | ---
license: apache-2.0
---
|
kdvtr/plastilineStyle_LoRA | kdvtr | "2025-04-06T22:06:09Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-04-06T22:05:22Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: illustration in PLASTILINE style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - kdvtr/plastilineStyle_LoRA
<Gallery />
## Model description
These are kdvtr/plastilineStyle_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use illustration in PLASTILINE style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](kdvtr/plastilineStyle_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
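Until the snippet above is filled in, a minimal sketch (assumptions: the standard 🤗 Diffusers SDXL LoRA-loading API; the prompt uses the trigger words documented above plus a placeholder subject):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Minimal sketch: load the SDXL base model, attach this LoRA, and sample one image.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kdvtr/plastilineStyle_LoRA")
image = pipe("illustration in PLASTILINE style, a cat").images[0]  # placeholder subject
image.save("plastiline.png")
```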
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
OumaElha/Speech7 | OumaElha | "2023-06-26T22:06:28Z" | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-06-26T20:25:41Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Speech7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Speech7
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2042 | 1.44 | 100 | 4.2839 | 1 |
| 4.1119 | 2.88 | 200 | 4.2368 | 1 |
| 4.1696 | 4.32 | 300 | 4.2242 | 1 |
| 4.1838 | 5.76 | 400 | 4.2262 | 1 |
| 4.2368 | 7.19 | 500 | 4.2247 | 1 |
| 4.1376 | 8.63 | 600 | 4.2179 | 1 |
| 4.1417 | 10.07 | 700 | 4.2209 | 1 |
| 4.2254 | 11.51 | 800 | 4.2471 | 1 |
| 4.2302 | 12.95 | 900 | 4.2145 | 1 |
| 4.1778 | 14.39 | 1000 | 4.3393 | 1 |
| 4.1574 | 15.83 | 1100 | 4.2917 | 1 |
| 4.2026 | 17.27 | 1200 | 4.2731 | 1 |
| 4.141 | 18.71 | 1300 | 4.2302 | 1 |
| 4.2525 | 20.14 | 1400 | 4.2104 | 1 |
| 4.2325 | 21.58 | 1500 | 4.2543 | 1 |
| 4.1789 | 23.02 | 1600 | 4.4020 | 1 |
| 4.1456 | 24.46 | 1700 | 4.2143 | 1 |
| 4.1754 | 25.9 | 1800 | 4.2123 | 1 |
| 12.3485 | 27.34 | 1900 | 50.3232 | 1 |
| 4.2031 | 28.78 | 2000 | 4.2259 | 1 |
| 4.1497 | 30.22 | 2100 | 4.3216 | 1 |
| 4.2171 | 31.65 | 2200 | 4.2108 | 1 |
| 4.1981 | 33.09 | 2300 | 4.3025 | 1 |
| 4.2091 | 34.53 | 2400 | 4.2173 | 1 |
| 4.2005 | 35.97 | 2500 | 4.2747 | 1 |
| 4.2386 | 37.41 | 2600 | 4.2027 | 1 |
| 4.2343 | 38.85 | 2700 | 4.2137 | 1 |
| 4.0967 | 40.29 | 2800 | 4.2804 | 1 |
| 4.1737 | 41.73 | 2900 | 4.2072 | 1 |
| 4.171 | 43.17 | 3000 | 4.2186 | 1 |
| 4.2117 | 44.6 | 3100 | 4.2161 | 1 |
| 4.1021 | 46.04 | 3200 | 4.2389 | 1 |
| 4.2572 | 47.48 | 3300 | 4.2126 | 1 |
| 3.4461 | 48.92 | 3400 | 4.2700 | 1 |
| 0.7289 | 50.36 | 3500 | 4.2700 | 1 |
| 0.4496 | 51.8 | 3600 | 4.2700 | 1 |
| 0.1189 | 53.24 | 3700 | 4.2700 | 1 |
| 8.233 | 54.68 | 3800 | 4.2700 | 1 |
| 3.8072 | 56.12 | 3900 | 4.2700 | 1 |
| 0.0 | 57.55 | 4000 | nan | 1 |
| 0.0 | 58.99 | 4100 | nan | 1 |
| 0.0 | 60.43 | 4200 | nan | 1 |
| 0.0 | 61.87 | 4300 | nan | 1 |
| 0.0 | 63.31 | 4400 | nan | 1 |
| 0.0 | 64.75 | 4500 | nan | 1 |
| 0.0 | 66.19 | 4600 | nan | 1 |
| 0.0 | 67.63 | 4700 | nan | 1 |
| 0.0 | 69.06 | 4800 | nan | 1 |
| 0.0 | 70.5 | 4900 | nan | 1 |
| 0.0 | 71.94 | 5000 | nan | 1 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
minchul/cvlface_adaface_vit_base_webface4m | minchul | "2024-08-19T20:54:00Z" | 65 | 0 | transformers | [
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"en",
"arxiv:2010.11929",
"license:mit",
"region:us"
] | feature-extraction | "2024-06-06T13:52:14Z" | ---
language: en
license: mit
arxiv: 2010.11929
---
<div align="center">
<h1>
CVLFace Pretrained Model (ADAFACE VIT BASE WEBFACE4M)
</h1>
</div>
<p align="center">
🌎 <a href="https://github.com/mk-minchul/CVLface" target="_blank">GitHub</a> • 🤗 <a href="https://huggingface.co/minchul" target="_blank">Hugging Face</a>
</p>
-----
## 1. Introduction
Model Name: ADAFACE VIT BASE WEBFACE4M
Related Paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (https://arxiv.org/abs/2010.11929)
Please cite the original paper and follow the license of the training dataset.
## 2. Quick Start
```python
from transformers import AutoModel
from huggingface_hub import hf_hub_download
import shutil
import os
import torch
import sys
# helper function to download a huggingface repo
def download(repo_id, path, HF_TOKEN=None):
os.makedirs(path, exist_ok=True)
files_path = os.path.join(path, 'files.txt')
if not os.path.exists(files_path):
hf_hub_download(repo_id, 'files.txt', token=HF_TOKEN, local_dir=path, local_dir_use_symlinks=False)
with open(os.path.join(path, 'files.txt'), 'r') as f:
files = f.read().split('\n')
for file in [f for f in files if f] + ['config.json', 'wrapper.py', 'model.safetensors']:
full_path = os.path.join(path, file)
if not os.path.exists(full_path):
hf_hub_download(repo_id, file, token=HF_TOKEN, local_dir=path, local_dir_use_symlinks=False)
# helper function to load a model from a local path
def load_model_from_local_path(path, HF_TOKEN=None):
cwd = os.getcwd()
os.chdir(path)
sys.path.insert(0, path)
model = AutoModel.from_pretrained(path, trust_remote_code=True, token=HF_TOKEN)
os.chdir(cwd)
sys.path.pop(0)
return model
# helper function to download a huggingface repo and load the model from it
def load_model_by_repo_id(repo_id, save_path, HF_TOKEN=None, force_download=False):
if force_download:
if os.path.exists(save_path):
shutil.rmtree(save_path)
download(repo_id, save_path, HF_TOKEN)
return load_model_from_local_path(save_path, HF_TOKEN)
if __name__ == '__main__':
HF_TOKEN = 'YOUR_HUGGINGFACE_TOKEN'
path = os.path.expanduser('~/.cvlface_cache/minchul/cvlface_adaface_vit_base_webface4m')
repo_id = 'minchul/cvlface_adaface_vit_base_webface4m'
model = load_model_by_repo_id(repo_id, path, HF_TOKEN)
# input is an RGB image, normalized to [-1, 1]
from torchvision.transforms import Compose, ToTensor, Normalize
from PIL import Image
img = Image.open('path/to/image.jpg')
trans = Compose([ToTensor(), Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])
input = trans(img).unsqueeze(0) # torch.randn(1, 3, 112, 112)
out = model(input)
```
|
lesso03/15f30735-cdd1-4e37-92a8-320c0353608f | lesso03 | "2025-04-15T03:24:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-15T02:54:35Z" | |
jgarciaa15/clasificationfilms | jgarciaa15 | "2023-11-11T04:26:51Z" | 0 | 0 | null | [
"art",
"es",
"arxiv:1910.09700",
"region:us"
] | null | "2023-11-11T04:08:48Z" | ---
language:
- es
tags:
- art
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Intellillama/Intellillama_Codellama_7B_Instruct_GPTQ | Intellillama | "2024-04-24T04:26:23Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"arxiv:2308.12950",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-04-24T02:15:31Z" |
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CodeLlama 7B Instruct - GPTQ
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [CodeLlama 7B Instruct](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Meta's CodeLlama 7B Instruct](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-7b-instruct-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: CodeLlama
```
[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 3.90 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/CodeLlama-7B-Instruct-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/CodeLlama-7B-Instruct-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/CodeLlama-7B-Instruct-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `CodeLlama-7B-Instruct-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/CodeLlama-7B-Instruct-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=True,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Meta's CodeLlama 7B Instruct
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [x] Instructions / chat.
- [ ] Python specialist.
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the Instruct version of the 7B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
madatnlp/ke-t5-scratch | madatnlp | "2022-05-09T10:52:51Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-08T02:59:40Z" | ---
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/ke-t5-scratch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/ke-t5-scratch
This model is a fine-tuned version of [madatnlp/ke-t5-math-py](https://huggingface.co/madatnlp/ke-t5-math-py) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4760
- Validation Loss: 0.7360
- Epoch: 36
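A minimal hedged sketch of text-to-text inference with this checkpoint, assuming the repo contains tokenizer files alongside the TF weights; the Korean math prompt below is a made-up placeholder:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("madatnlp/ke-t5-scratch")
model = TFAutoModelForSeq2SeqLM.from_pretrained("madatnlp/ke-t5-scratch")

# hypothetical Korean math word-problem prompt; replace with your own input
inputs = tokenizer("문제: 사과가 3개, 배가 5개 있습니다. 과일은 모두 몇 개입니까?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```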
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2751 | 2.1074 | 0 |
| 2.2716 | 1.7945 | 1 |
| 1.8889 | 1.5726 | 2 |
| 1.6760 | 1.3722 | 3 |
| 1.5021 | 1.3280 | 4 |
| 1.4369 | 1.2523 | 5 |
| 1.3352 | 1.0619 | 6 |
| 1.2749 | 1.1156 | 7 |
| 1.2170 | 1.0452 | 8 |
| 1.1713 | 1.0596 | 9 |
| 1.1410 | 1.0080 | 10 |
| 1.0884 | 1.0213 | 11 |
| 1.0508 | 0.9223 | 12 |
| 0.9933 | 0.9353 | 13 |
| 0.9871 | 0.8749 | 14 |
| 0.9251 | 0.9173 | 15 |
| 0.9282 | 0.8620 | 16 |
| 0.8849 | 0.8093 | 17 |
| 0.8613 | 0.7823 | 18 |
| 0.8322 | 0.8016 | 19 |
| 0.8070 | 0.8844 | 20 |
| 0.7737 | 0.7635 | 21 |
| 0.7465 | 0.8440 | 22 |
| 0.7178 | 0.7958 | 23 |
| 0.7036 | 0.7739 | 24 |
| 0.6813 | 0.7347 | 25 |
| 0.6597 | 0.7545 | 26 |
| 0.6427 | 0.7394 | 27 |
| 0.6154 | 0.7212 | 28 |
| 0.5892 | 0.7653 | 29 |
| 0.5696 | 0.7073 | 30 |
| 0.5644 | 0.6977 | 31 |
| 0.5307 | 0.6977 | 32 |
| 0.5159 | 0.7736 | 33 |
| 0.5131 | 0.8138 | 34 |
| 0.4812 | 0.7623 | 35 |
| 0.4760 | 0.7360 | 36 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lesso14/1c965b58-d0a0-414f-8d10-98b436e0c235 | lesso14 | "2025-02-22T05:36:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Genstruct-7B",
"base_model:adapter:NousResearch/Genstruct-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T04:49:35Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Genstruct-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c965b58-d0a0-414f-8d10-98b436e0c235
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: NousResearch/Genstruct-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e18b9f14afc97292_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e18b9f14afc97292_train_data.json
type:
field_instruction: source
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso14/1c965b58-d0a0-414f-8d10-98b436e0c235
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000214
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/e18b9f14afc97292_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 140
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 12217269-0b12-4e71-a14a-0b74606beadf
wandb_project: 14a
wandb_run: your_name
wandb_runid: 12217269-0b12-4e71-a14a-0b74606beadf
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1c965b58-d0a0-414f-8d10-98b436e0c235
This model is a fine-tuned version of [NousResearch/Genstruct-7B](https://huggingface.co/NousResearch/Genstruct-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0204
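Since this repo stores a LoRA adapter rather than full model weights, here is a minimal hedged sketch of loading it on top of the base model with PEFT; the dtype, device settings, and prompt are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Genstruct-7B", torch_dtype=torch.bfloat16, device_map="auto"
)
# attach the adapter weights from this repo
model = PeftModel.from_pretrained(base, "lesso14/1c965b58-d0a0-414f-8d10-98b436e0c235")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Genstruct-7B")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```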
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000214
- train_batch_size: 4
- eval_batch_size: 4
- seed: 140
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.9974 |
| 6.2005 | 0.0069 | 50 | 2.4747 |
| 6.1974 | 0.0139 | 100 | 2.3337 |
| 5.4742 | 0.0208 | 150 | 2.4632 |
| 5.5541 | 0.0278 | 200 | 2.3569 |
| 5.7224 | 0.0347 | 250 | 2.2257 |
| 6.5817 | 0.0417 | 300 | 2.1418 |
| 6.1084 | 0.0486 | 350 | 2.0608 |
| 6.0972 | 0.0556 | 400 | 2.0306 |
| 5.7962 | 0.0625 | 450 | 2.0216 |
| 5.822 | 0.0695 | 500 | 2.0204 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DrNicefellow/QwQ-32B-Preview-abliterated-2.0bpw-exl2 | DrNicefellow | "2024-12-07T02:21:01Z" | 7 | 0 | null | [
"safetensors",
"qwen2",
"base_model:huihui-ai/QwQ-32B-Preview-abliterated",
"base_model:quantized:huihui-ai/QwQ-32B-Preview-abliterated",
"license:apache-2.0",
"2-bit",
"exl2",
"region:us"
] | null | "2024-12-06T16:46:44Z" | ---
license: apache-2.0
base_model: huihui-ai/QwQ-32B-Preview-abliterated
---
This is a 2.0 bpw quantized version of [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated) made with [exllamav2](https://github.com/turboderp/exllamav2).
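A hedged sketch of loading this quant with exllamav2's Python API, modeled on the project's example scripts at the time of writing; names and signatures can drift between exllamav2 releases, so treat this as an outline rather than the canonical recipe:
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/QwQ-32B-Preview-abliterated-2.0bpw-exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the cache while weights stream in
model.load_autosplit(cache)                # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Hello, my name is", settings, 64))
```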
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you want me to drink.
|
RatanRohith/NeuralPizza-7B-V0.1 | RatanRohith | "2024-01-12T17:00:51Z" | 1,371 | 3 | Transformers | [
"Transformers",
"safetensors",
"mistral",
"text-generation",
"transformers",
"fine-tuned",
"language-modeling",
"direct-preference-optimization",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-12T16:31:57Z" | ---
library_name: Transformers
tags:
- transformers
- fine-tuned
- language-modeling
- direct-preference-optimization
datasets:
- Intel/orca_dpo_pairs
license: apache-2.0
---
## Model Description
NeuralPizza-7B-V0.1 is a fine-tuned version of the SanjiWatsuki/Kunoichi-7B model, specialized through Direct Preference Optimization (DPO). It was fine-tuned using the Intel/orca_dpo_pairs dataset, focusing on enhancing model performance based on preference comparisons.
## Intended Use
This model is primarily intended for research and experimental applications in language modeling, especially for exploring the Direct Preference Optimization method. It provides insights into the nuances of DPO in the context of language model tuning.
## Training Data
The model was fine-tuned using the Intel/orca_dpo_pairs dataset. This dataset is designed for applying and testing Direct Preference Optimization techniques in language models.
## Training Procedure
The training followed the guidelines and methodologies outlined in the "Fine-Tune a Mistral 7B Model with Direct Preference Optimization" guide from Medium's Towards Data Science platform. Specific training regimes and hyperparameters are based on this guide: https://medium.com/towards-data-science/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
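As a rough, hedged illustration of the DPO recipe that guide describes (not the author's exact script): the guide first reformats Intel/orca_dpo_pairs into prompt/chosen/rejected columns, then trains with TRL's `DPOTrainer`. The API shown follows TRL versions contemporary with this card; newer TRL moves `beta` into a `DPOConfig`.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "SanjiWatsuki/Kunoichi-7B"  # the base model named in this card
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# rows must expose "prompt", "chosen", "rejected" columns; the guide maps
# orca_dpo_pairs into that shape before training
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds the frozen reference copy when None
    args=TrainingArguments(output_dir="neuralpizza-dpo",  # hypothetical
                           per_device_train_batch_size=2),
    beta=0.1,        # the DPO temperature used in the guide
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```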
## Limitations and Bias
As an experimental model, it may carry biases inherent from its training data. The model's performance and outputs should be critically evaluated, especially in sensitive and diverse applications. |
Triangle104/mistral-nemo-narwhal-12B-Q4_K_M-GGUF | Triangle104 | "2025-01-14T11:14:27Z" | 27 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/reddit-dpo",
"base_model:nbeerbower/mistral-nemo-narwhal-12B",
"base_model:quantized:nbeerbower/mistral-nemo-narwhal-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-14T11:12:29Z" | ---
license: apache-2.0
library_name: transformers
base_model: nbeerbower/mistral-nemo-narwhal-12B
datasets:
- nbeerbower/reddit-dpo
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/mistral-nemo-narwhal-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/mistral-nemo-narwhal-12B`](https://huggingface.co/nbeerbower/mistral-nemo-narwhal-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/mistral-nemo-narwhal-12B) for more details on the model.
---
## Model details
Mahou-1.5-mistral-nemo-12B-lorablated finetuned on reddit-dpo.
### Method
ORPO tuned with 8x A100 for 1 epoch.
QLoRA config:
```python
# reconstructed from the original card, which pasted this config without code
# fences; the imports and the torch_dtype value are assumptions added here
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

torch_dtype = torch.bfloat16  # assumed; the source snippet uses it without defining it

# QLoRA config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)
```
Training config:
```python
# reconstructed likewise; ORPOConfig comes from TRL, and new_model (the run
# name) is defined elsewhere in the original training script
from trl import ORPOConfig

new_model = "mistral-nemo-narwhal-12B"  # assumed run name, taken from the repo id

orpo_args = ORPOConfig(
    run_name=new_model,
    learning_rate=8e-6,
    lr_scheduler_type="linear",
    max_length=2048,
    max_prompt_length=1024,
    max_completion_length=1024,
    beta=0.1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    optim="paged_adamw_8bit",
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=0.2,
    logging_steps=1,
    warmup_steps=10,
    max_grad_norm=10,
    report_to="wandb",
    output_dir="./results/",
    bf16=True,
    gradient_checkpointing=True,
)
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/mistral-nemo-narwhal-12B-Q4_K_M-GGUF --hf-file mistral-nemo-narwhal-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/mistral-nemo-narwhal-12B-Q4_K_M-GGUF --hf-file mistral-nemo-narwhal-12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/mistral-nemo-narwhal-12B-Q4_K_M-GGUF --hf-file mistral-nemo-narwhal-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/mistral-nemo-narwhal-12B-Q4_K_M-GGUF --hf-file mistral-nemo-narwhal-12b-q4_k_m.gguf -c 2048
```
|
nttx/6e539ebe-27a8-4df1-8a42-91718979fafd | nttx | "2025-01-16T03:11:43Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-01-16T03:04:09Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6e539ebe-27a8-4df1-8a42-91718979fafd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7ed010bf1fb524a9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7ed010bf1fb524a9_train_data.json
type:
field_instruction: chunk
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 8
eval_max_new_tokens: 128
eval_steps: 25
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/6e539ebe-27a8-4df1-8a42-91718979fafd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/7ed010bf1fb524a9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4adac9b8-76a8-405e-acc0-d52cc393b47b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4adac9b8-76a8-405e-acc0-d52cc393b47b
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6e539ebe-27a8-4df1-8a42-91718979fafd
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0962
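This repo holds a PEFT LoRA adapter rather than full model weights. A hedged sketch of merging it into the base model for standalone deployment; the dtype and output directory are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-v0.3", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "nttx/6e539ebe-27a8-4df1-8a42-91718979fafd")

# fold the LoRA deltas into the base weights and drop the PEFT wrappers
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")  # hypothetical output directory
```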
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.1805 | 0.0073 | 1 | 1.9271 |
| 2.4426 | 0.1815 | 25 | 0.2614 |
| 0.363 | 0.3630 | 50 | 0.0962 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LieDeath/MergeStove2.5D | LieDeath | "2024-01-20T04:17:47Z" | 70 | 39 | diffusers | [
"diffusers",
"art",
"text-to-image",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-26T12:58:33Z" | ---
license: cc-by-nc-4.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
I found a new AI tool, Shakker, a great image-to-image tool. You can try it via https://www.shakker.ai; it can help you:
- Remix: Upload a picture. Just switch the prompts, and you can create stunning images in the same style.
- Style Transfer: Shakker not only extracts the style, but also switches among various styles.
Besides these, Shakker also offers Object Control, Composition Control, Collage Redrawing, etc.
# MergeStove2.5D(融合炉2.5D)
**Hatsune Miku, Thank you.**
It's time to say goodbye to MergeStove, sayonara. Thanks for your sincere support. The **MK8** may be the last MergeStove, and if I have enough time, I will reconstruct this Readme, including the previews of MK8.
是时候和MergeStove说再见了,感谢你们的陪伴。**MK8**可能会是最后一个MergeStove模型了,如果我有时间,我会把现在的Readme重构的,包括补上MK8的预览图。
MK7 is ready!!! In memory of my college entrance exam a whole year ago. The previews for MK7 are all here; just download and enjoy it. :)
MK7版本已发布,纪念一年前我的高考。预览图已补充,下载它,你会喜欢它的。:)
**Important** Use the negatives below for the best performance of MK7. Other options are also available in Selected Negative Prompts for MK7.txt.
*badhandv4, EasyNegative, verybadimagenegative_v1.3,illustration, 3d, sepia, painting, cartoons, sketch, (worst quality:1.74), (low quality:1.74), (normal quality:1.44), lowres, bad anatomy, normal quality, ((monochrome)), ((grayscale)), ((letters)), ((english)), capital*
It contains 3 negative textual embeddings, which are **badhandv4, EasyNegative, verybadimagenegative_v1.3**; each of them can easily be downloaded on Hugging Face.
**重要** 使用上面的负面描述词以使MK7达到最佳效果。其他的可选负面描述词可以在Selected Negative Prompts for MK7.txt内查看。
它包含3个负面嵌入Embeddings,分别是**badhandv4, EasyNegative, verybadimagenegative_v1.3**,且每个都能轻松的在huggingface上下载到。
PS: MK5 and MK6 will work much better with the configs below.
提示:MK5和MK6使用以下设置可能会更好。
*Steps: 20, Sampler: Heun, CFG scale: 7, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 3, Hires upscaler: R-ESRGAN 4x+ Anime6B, Used embeddings: EasyNegative [119b]*
**mk6 reconstructed** its base model, which changed to AbyssOrangeMix2_sfw. And with models newly merged in, it expands its knowledge, which makes it **impressive** in extra-large pictures. I hope you can love it!
**mk6版更新重构了**它本身的基础模型,其中的AbyssOrangeMix2被更换为sfw版。还有我加入了很多新模型来扩展它的知识面,这使得mk6在超大图片中表现**惊艳**。
The mk5 update, made specially for **Chinese friends**, brings quite a few improvements.
mk5版更新,是专门为了**中国朋友们**准备的,有非常多的改进。
MergeStove2.5D is a **merged** Stable Diffusion model specialized in **anime**, which improves the anatomy of anime characters, especially **eyes** and **hands**, without losing anime objects (like substances or characters).
It works much better at 0.9K-1.2K resolution, or use Hires.fix instead. In other words, before Hires.fix, a long side of 0.9k-1.2k and a short side of 0.5k-0.7k work better.
Provided in 6 versions. Personally, mk1 works better, but mk2 gives out more vivid pictures. The previous updates mk3 and mk4 proudly do better on 2.5D figures: mk3 does better at generating bodies, but mk4 improves scenes.
融合炉2.5D是一个**动漫风格特化**的稳定扩散模型,由**多个模型融合**而来,专门改善动漫人物的身体结构,特别是**眼睛**和**手**,同时不会丢失任何动漫中的对象(物体、人物等)。
其在900-1200像素的分辨率下工作较好,或者可以使用高清修复改善其高分辨率表现。换句话说,高清修复前长边900-1200像素,短边500-700像素这样子比较好。
提供6个版本。个人感觉mk1版工作的更好,但是mk2版本能生成更生动的图像。我可以很自豪的说,先前更新的mk3和mk4在2.5D人物中表现的更好。mk3有相对较好的人体,但是mk4改进了景物表现。
**No commercial usage! 严禁商用!**
# Preview(预览)
**Updates**
**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *demon tail, butterfly, tail, bug, 1girl, long hair, wristband, shoes, hatsune miku, shirt, choker, black legwear, aqua hair, bike shorts, solo, blue butterfly, twintails, black choker, bracelet, full body, black ribbon, cow tail, very long hair, tail ornament, jewelry, black bow, hair between eyes, ahoge, white shirt, earrings, grey background, tail bow, standing, jacket, shorts, collarbone, off shoulder, short sleeves, ribbon, black footwear, aqua eyes, gradient, bow, socks, looking at viewer*

**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *{masterpiece}, hatsune miku, sit on sakura tree branch, floating cyan long hair, wind flow, sakura petals floating, closed eyes, sun shine upward, shadows,white long dress, cloud sky with sun, hamony and peace, bare feet, medium breast*

**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *flying sweatdrops, long hair, blue hair, hair ornament, 1girl, english text, open mouth, closed eyes, phone, smile, cellphone, uniform, necktie, gloves, bangs, solo, blush, hatsune miku*

**Previous**
**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *close-up, upper body, blue eyes black middle, snow miku stand in right side of frame, starry night with distance snow mountains scene in left side of frame, solo charater, snow stage, thick coat long dress, shinny and vivid eyes, curly long aqua hair fall on ground, medium breasts, windless, floating snows, mountain right, snow forest*

**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *halo, [wings], leg tie, (hathatsune) miku, full body, long legs, [[lips]], red eyes, medium breasts, (white hair), (streaked blue) hair, round face, [ahoge], black gloves, (hathatsune) miku, closed mouth, full body, straight long 2 legs, starry night, bubble nebula,, [[lips]], lace long dress, small breasts, flat chest, flowers*

**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *solo, halo, feather wings, (hathatsune) miku, fox ears, straight long 2 legs, black long silk stocking, leg ring tie, full body, [[lips]], red eyes, medium breasts, ahoge, (white hair), (streaked blue) hair, round face, black gloves, closed mouth, starry night, bubble nebula, lace long dress, medium breasts, feathers*

**mk5** (after hi-res fix at 0.7)(高清修复比率0.7) *(masterpiece), (((a girl))), ((hatsune miku)), (smiling), ((shining red medium eyes)), medium breasts, pink lips, moon in the sky, dark night, blue flowers surround one's, (blue dress), (blue long hair), stars shining, green grassland, (stream in grassland), (one's stand in the grassland), face to viewer, black higheels, long legs, full body*

**mk5** (after hi-res fix at 0.6)(高清修复比率0.6) *hatsune miku, closed mouth, full body, straight long legs, starry night, bubble nebula,, [[lips]], black long dress*

**mk1** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, ruby eyes, face to viewer, solo, medium breasts, soft light, outdoors, garden, seaside, beauty*

**mk1** *miku, crystal eyes, upper body, face to viewer, solo, medium breasts, soft light, garden, seaside, ocean, bikini*

**mk1** *miku, crystal eyes, upper body, face to viewer, solo, medium breasts, soft light, outdoors, garden, seaside, beauty, blue white dress*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk3** (after hi-res fix at 0.7)(高清修复比率0.7) *hathatsune miku, seaside, shinny eyes, medium breasts, garden, ocean, seawind, soft sunset, beauty, beach shoes, short dress*

**mk3** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, seaside, shinny eyes, medium breasts, bikini, surfing, on surfing board, wave, seawind, (wet body:0.75), (🏄🏻:0.66)*

**mk4** (after hi-res fix at 0.7)(高清修复比率0.7) *hathatsune miku, seaside, shinny eyes, medium breasts, garden, ocean, seawind, soft sunset, beauty, beach shoes, short dress*

**mk4** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, seaside, shinny eyes, medium breasts, bikini, bare feet, (surfing), (on 1_surfing_board), wave, seawind, wet body, liquid on cloth, see through*

# Usage(使用方法)
Use it as a normal Stable Diffusion v1.x model package; no external YAML config is needed.
**Recommended settings: Steps: 9-28, Sampler: DPM++ SDE Karras, CFG scale: 5-16, Denoising strength: 0.6-0.7, Hires upscale: 2, Hires upscaler: Latent**
用作正常的稳定扩散模型包v1.x,无需额外的YAML配置文件。
**推荐设置:迭代步数:9-28,采样器:DPM++ SDE Karras,提示词相关性:5-16,去噪强度:0.6-0.7,高清修复放大倍率:2,高清修复放大器:Latent**
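For reference, here is a minimal sketch of applying these recommended settings with the 🤗 diffusers library. This is an assumption-laden illustration, not the author's workflow: the checkpoint filename is a placeholder, the settings above were written for the AUTOMATIC1111 WebUI (whose Hires.fix has no single-call diffusers equivalent), and mapping "DPM++ SDE Karras" to `DPMSolverSDEScheduler` with Karras sigmas is an assumption.
```python
# Minimal sketch, assuming a single-file SD v1.x checkpoint (filename is a placeholder).
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "MergeStove2.5D_mk7.safetensors",  # placeholder: your downloaded checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Assumed WebUI -> diffusers mapping for "DPM++ SDE Karras".
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="hatsune miku, blue eyes black middle, full body",
    negative_prompt="(bad_prompt), cleavage, lowres, bad anatomy, bad hands",
    num_inference_steps=28,  # recommended range: 9-28
    guidance_scale=7.0,      # recommended CFG range: 5-16
).images[0]
image.save("miku.png")
```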
# Tags(描述词)
Use whatever positive prompts you like; fewer quality words may actually work better. You can take inspiration from the preview descriptions above.
**For negatives, it is better to use the basic prompts, or simply replace them with the bad_prompt embedding.**
**Negatives Example:** *(bad_prompt), cleavage, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artists name*
正面填写你喜欢的描述词,也许更少的质量描述词能使其工作的更好。你可以在上面的预览图描述词中得到灵感。
**负面描述词最好用基本负面,或者简单的把它们替换成bad_prompt这个嵌入模型。**
**负面描述词示例:** *(bad_prompt), cleavage, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artists name*
**Use "blue eyes black middle" description can get huge improvement on pupil at low resolution! Colors can change as your preferance.**
**使用"blue eyes black middle"这样子的描述词可在低分辨率下极大的改善对瞳孔的描绘!颜色可以改为你喜欢的。**
Here are **better negatives**, thanks to andite: *lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))*
这里是**更好的负面描述词**,谢谢andite:*lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))*
From the NovelAI 中文频道 community, I got some **even better negative prompts**. Here they are: *EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, extra fingers, fewer fingers, strange fingers, ((bad hand)), Hand grip, (lean), Extra ears, (Four ears), Strange eyes, ((Bare nipple)), nsfw, (three arms), Many hands, (Many arms), ((watermarking)), (inaccurate limb:1.2)*
Note: this set uses the **EasyNegative** embedding, which you need to download manually. It also works well as a filter for nsfw content.
我在NovelAI 中文频道找到了一些**还要更好的负面描述词**。它们在这里, *EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, extra fingers, fewer fingers, strange fingers, ((bad hand)), Hand grip, (lean), Extra ears, (Four ears), Strange eyes, ((Bare nipple)), nsfw, (three arms), Many hands, (Many arms), ((watermarking)), (inaccurate limb:1.2)*
注意,它使用了**EasyNegative**这个嵌入模型,你需要手动下载它。这些描述词还能更好的过滤成人内容。
# Bias(不足)
**Notice:** It is definitely important to enable **Hires.fix**, especially on **mk5 and mk6**, or low-quality images will be generated!!!
**注意:** 启用**高清修复**至关重要,特别是在**mk5和mk6**上。不然会产生低质量图片!!!
**Includes nsfw content, due to shortcomings of its original models!**
**DO NOT USE your generated pictures to mock human artists or for any form of Internet violence! For example, on Bilibili or YouTube.**
Sometimes long necks appear. Images are still a bit hazy. Some themes produce incorrect skin gloss. The model sometimes overfits to copyrighted images from the training set. It often produces girls with unnaturally large breasts unless the cleavage tag is used in the negative prompt.
**含有成人内容,由于其原始模型本身的不足!**
**请勿把你用本模型生成的图像用于嘲讽人类画师或者其他任何形式的网络暴力!例如在Bilibili或者Youtube上。**
有时会生成过长的脖子。仍然有点模糊。在某些特定场景会产生错误的皮肤光泽。有时生成的图像会过拟合训练集内版权图片。经常会生成非人类大小的乳房(USB)的女性图片,除非在负面描述词中使用cleavage这个标签。
# Formula(融合配方)
**Round1** animefull-latest (NovelAI) + 64in1 (private, from the Chinese AI community NovelAI 中文频道), weighted sum at rate 0.4
**Round2** () + AbyssOrangemix2_nsfw (WarriorMama777), weighted sum at rate 0.2
After baking in the vae-ft-mse-840000-ema-pruned (StabilityAI) VAE, pruning EMA weights, and compressing to FP16, we get MergeStove2.5D_mk1.
**第一轮** animefull-latest(NovelAI)+64in1(私有,来自中国AI社区NovelAI 中文频道) 加权和模式 比率0.4
**第二轮** ()+AbyssOrangemix2_nsfw(WarriorMama777) 加权和模式 比率0.2
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk1模型。
**Round3A** MergeStove2.5D_mk1 + Anmokomergetest1 (private, from the Chinese AI community NovelAI 中文频道; download [Anmokomergetest1](https://huggingface.co/LieDeath/Anmokomergetest1)), weighted sum at rate 0.4
After baking in the vae-ft-mse-840000-ema-pruned (StabilityAI) VAE, pruning EMA weights, and compressing to FP16, we get MergeStove2.5D_mk2.
**第三轮A** MergeStove2.5D_mk1+Anmokomergetest1(私有,来自中国AI社区NovelAI 中文频道,下载[Anmokomergetest1](https://huggingface.co/LieDeath/Anmokomergetest1)。) 加权和模式 比率0.4
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk2模型。
**Round3B** MergeStove2.5D_mk1 + uberRealisticPornMer_urpMv11 (Civitai, from saftle), weighted sum at rate 0.1
After baking in the vae-ft-mse-840000-ema-pruned (StabilityAI) VAE, pruning EMA weights, and compressing to FP16, we get MergeStove2.5D_mk3.
**第三轮B** MergeStove2.5D_mk1+uberRealisticPornMer_urpMv11(来自CivitAI的saftle) 加权和模式 比率0.1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk3模型。
**Round4B** MergeStove2.5D_mk3 + momoko-e (Anonymous), weighted sum at rate 0.1
**Round5B** () + Protogen_V2.2 (darkstorm2150), weighted sum at rate 0.1
After baking in the vae-ft-mse-840000-ema-pruned (StabilityAI) VAE, pruning EMA weights, and compressing to FP16, we get MergeStove2.5D_mk4.
**第四轮B** MergeStove2.5D_mk3+momoko-e(匿名) 加权和模式 比率0.1
**第五轮B** ()+Protogen_V2.2(darkstorm2150) 加权和模式 比率0.1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk4模型。
**Round4A** MergeStove2.5D_mk2 + chilloutmix_Ni (Civitai, from tasuku), weighted sum at rate 0.1
**Round5A** () + laolei-new-berry-protogen mix (Civitai, from hokono), weighted sum at rate 0.1
**Round6A** () + pastelmix (andite), weighted sum at rate 0.05
After baking in the vae-ft-mse-840000-ema-pruned (StabilityAI) VAE and pruning EMA weights, we get MergeStove2.5D_mk5.
**第四轮A** MergeStove2.5D_mk2+chilloutmix_Ni(来自CivitAI的tasuku) 加权和模式 比率0.1
**第五轮A** ()+laolei-new-berry-protogen mix(来自CivitAI的hokono) 加权和模式 比率0.1
**第六轮A** ()+pastelmix(andite) 加权和模式 比率0.05
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,得到MergeStove2.5D_mk5模型。
**Special:** AbyssOrangeMix2_sfw works better in all of the MergeStove2.5D merges above. Only Round6A was performed in FP32 mode.
**注意:** AbyssOrangemix2_sfw在上面所有的MergeStove2.5D系列融合模型中工作的更好。只有第六轮A使用了FP32融合模式。
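Every round above and below uses the same weighted-sum operation. For illustration, here is a minimal sketch of merging two checkpoints at rate r (filenames are placeholders; the actual merges were done with WebUI merge tools, and the VAE baking and EMA pruning steps are not shown):
```python
# Minimal sketch of a weighted-sum checkpoint merge: out = (1 - r) * A + r * B.
import torch

rate = 0.4  # e.g. Round1 used rate 0.4

model_a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
model_b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, tensor_a in model_a.items():
    if key in model_b and torch.is_floating_point(tensor_a):
        # Interpolate every shared floating-point weight tensor.
        merged[key] = (1.0 - rate) * tensor_a + rate * model_b[key]
    else:
        merged[key] = tensor_a  # keep A's tensor when B has no counterpart

torch.save({"state_dict": merged}, "merged.ckpt")
```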
**Roundx** Replace AbyssOrangeMix2_nsfw with AbyssOrangeMix2_sfw and reconstruct mk5 in full FP32 to get modelx.
**Round7x** modelx + Nothing-V0.3 (Chinese, Anonymous), weighted sum at rate 0.1
**Round8x** () + 7th_anime_v2_A (syaimu), weighted sum at rate 0.1
**Round9x** () + mdjrny-v4 (Anonymous), MBW on the in4 layer only, rate 1
After baking in the vae-ft-mse-840000-ema-pruned (StabilityAI) VAE and pruning EMA weights, we get MergeStove2.5D_mk6.
**第x轮** 把AbyssOrangeMix2_nsfw替换为AbyssOrangeMix2_sfw,然后用全FP32格式重构mk5,得到modelx。
**第七轮x** modelx+Nothing-V0.3(来自中国,匿名) 加权和模式 比率0.1
**第八轮x** ()+7th_anime_v2_A(syaimu) 加权和模式 比率0.1
**第九轮x** ()+mdjrny-v4(Anonymous) MBW插件 仅调整in4层 比率1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,得到MergeStove2.5D_mk6模型。 |
nayohan/llama3-8b-it-prometheus-ko | nayohan | "2024-05-02T21:33:20Z" | 193 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"eval",
"llm-eval",
"conversational",
"en",
"dataset:nayohan/feedback-collection-ko",
"dataset:nayohan/feedback-collection-ko-chat",
"arxiv:2310.08491",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-01T14:08:16Z" | ---
language:
- en
- ko
license: llama3
library_name: transformers
tags:
- ko
- eval
- llm-eval
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- nayohan/feedback-collection-ko
- nayohan/feedback-collection-ko-chat
pipeline_tag: text-generation
---
# **Introduction**
This model was built by translating the [prometheus-eval/Feedback-Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) dataset into Korean and fine-tuning the llama3-8b-it model on it.
Train Dataset: [nayohan/feedback-collection-ko](https://huggingface.co/datasets/nayohan/feedback-collection-ko)
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "nayohan/llama3-8b-it-prometheus-ko"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16
)
```
### **Generating Text**
The system prompt is fixed. Set the score rubric according to the task at hand, then change orig_instruction, orig_response, and orig_reference_answer to run an evaluation.
```python
system_prompt = """###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations."""
sample = {
'orig_instruction': "나는 첨단 기술 프로젝트를 진행하는 팀에 있다. 그러나 최근 프로젝트 방향을 놓고 팀원들 사이에 지속적인 갈등이 발생하고 있다. 한 그룹은 급진적이고 위험하지만 잠재적으로 게임을 바꿀 수 있는 접근법을 강력하게 옹호하고 있다. 대조적으로, 다른 그룹은 보다 측정되고 더 안전하며 입증된 전략을 선호한다. 결과적으로 우리 팀은 분열되어 진전을 이룰 수 없다. 우리의 대화를 중재하고 해결을 이끌어낼 수 있는 AI 모델이 필요하다. 이러한 상황에 대응하여 AI 모델은 무엇을 말해야 하는가?",
'orig_response': "그러니까 프로젝트 방향에 합의가 안 되는 팀에 있는 거 아니야? 다들 잘 맞도록 배워야 할 것 같네요. 어쩌면 동전을 던지고 어느 쪽이 승리하는지 봐야 할 것 같아요. 그렇게 하면 논쟁이 없고 모두가 일터로 돌아갈 수 있습니다. 위험하든 안전하든 상관없어요. 하나를 골라서 그냥 가세요. 게다가, 모든 것이 무너지면 서로 비난하고 넘어갈 수 있습니다. 아니면 더 좋은 것은, 어떤 그룹의 아이디어가 더 나은지 보기 위한 경쟁이 왜 안 돼? 패배자는 우승자를 위해 점심을 사야 해요.",
'orig_reference_answer': "이 팀의 모든 사람들이 프로젝트에 열정적이고 성공하기를 원한다는 것은 분명하며, 이는 모든 해결의 훌륭한 출발점이다. 또한 갈등은 위험과 혁신에 대한 서로 다른 관점에서 발생한다는 것도 분명합니다. 둘 다 프로젝트의 성공에 중요한 고려 사항입니다. 두 접근법 모두에서 유효한 점을 인정하는 것으로 시작하겠습니다. 급진적인 접근법을 옹호하는 팀은 높은 보상과 획기적인 혁신의 잠재력에 의해 주도되며, 이는 모든 첨단 프로젝트에서 훌륭하고 필수적입니다.",
'orig_criteria':'모형은 대화에서 갈등 해결을 얼마나 효과적으로 처리하는가?',
'orig_score1_description':'모델은 갈등이나 오해를 가중시켜 문제를 중재하거나 해결할 수 있는 능력을 보이지 않는다.',
'orig_score2_description':'이 모델은 갈등에 대한 인식이 있지만 이를 해결하려는 시도는 효과가 없거나 잘못된 지침을 가지고 있다.',
'orig_score3_description':'이 모델은 갈등을 적당히 처리하여 일부 성공적인 해결 전술을 보여주지만 더 일관성이 있을 수 있다.',
'orig_score4_description':'이 모델은 갈등을 잘 처리하여 긴장을 확산시키고 해결을 효과적으로 안내하지만 미세한 미끄럼이 있습니다.',
'orig_score5_description':'이 모델은 갈등을 훌륭하게 관리하고, 지속적으로 긴장을 확산시키며, 대화를 타협으로 안내하고 긍정적인 대화 환경을 조성한다.',
'orig_feedback': '제공된 응답은 당면한 문제를 조정하거나 해결하는 능력을 보여주지 않는다. 대신 팀의 우려를 사소화하고 잠재적인 결과에 대한 고려 없이 동전을 던지거나 대회를 개최하는 것과 같은 비건설적 솔루션을 제안한다. 또한 응답은 상황이 잘못되면 팀 구성원들이 서로를 비난해야 한다는 것을 암시한다. 갈등을 더욱 악화시킨다. 건설적인 대화를 장려하거나 두 접근법 사이의 중간 지점을 찾는 것의 중요성을 인정하지 않는다. 따라서 전체 점수는 1이다.',
'orig_score': 1,
}
instruction = f"""###The instruction to evaluate: {sample['orig_instruction']}
###Response to evaluate: {sample['orig_response']}
###Reference Answer (Score 5): {sample['orig_reference_answer']}
###Score Rubrics: [{sample['orig_criteria']}]
Score 1: {sample['orig_score1_description']}
Score 2: {sample['orig_score2_description']}
Score 3: {sample['orig_score3_description']}
Score 4: {sample['orig_score4_description']}
Score 5: {sample['orig_score5_description']}
###Feedback:"""
# for training
# output = f"""{sample['orig_feedback']}
# [RESULT] {sample['orig_score']}"""
conversation = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": instruction},
# {"role": "assistant", "content": output}
]
input_ids = tokenizer.apply_chat_template(
conversation,
tokenize=True,
add_generation_prompt=True,
return_tensors='pt'
).to("cuda")
output = model.generate(input_ids, max_new_tokens=512)
output_text = tokenizer.decode(output[0][len(input_ids[0]):], skip_special_tokens=True)
print(output_text)
```
If you don't have a reference answer, the model can work without one: it evaluates orig_response, the response that follows orig_instruction. Use the following template code.
```python
instruction = f"""###The instruction to evaluate: {sample['orig_instruction']}
###Response to evaluate: {sample['orig_response']}
###Score Rubrics: [{sample['orig_criteria']}]
Score 1: {sample['orig_score1_description']}
Score 2: {sample['orig_score2_description']}
Score 3: {sample['orig_score3_description']}
Score 4: {sample['orig_score4_description']}
Score 5: {sample['orig_score5_description']}
###Feedback:"""
```
Because the model was trained with truncated feedback, the generated feedback may itself sometimes come out truncated.
```
# Result with orig_reference_answer
# OUTPUT: 이 대응은 갈등 해결에 대한 이해가 부족함을 보여준다. 동전을 던지거나 경쟁을 제안하는 것과 같이 제공된 제안은 문제의 복잡성을 무시하고 팀 내의 다양한 관점을 무시한다. 응답은 두 접근법의 잠재적 가치를 인정하지 않으며 팀 구성원 간의 이해와 존중을 촉진하지도 않는다. 또한 응답은 팀의 열정과 프로젝트에 대한 헌신을 인정하지 않는다. 따라서 전체 점수는 1이다.
[RESULT] 1
# Result without orig_reference_answer
# OUTPUT: 대응은 갈등 해결에 대한 이해를 나타내지 않는다. AI 모델은 갈등을 해결하기보다는 갈등을 악화시키는 것을 제안하며, 이는 점수 루브릭에 따라 요구 사항에 어긋난다. 동전을 던지고 경쟁을 제안하는 것은 팀 구성원 간의 긴장을 확산시키는 데 도움이 되지 않고 오히려 더 많은 갈등을 촉발할 수 있다. 또한, 팀 구성원이 더 나은 아이디어를 갖는 것이 아니라 "더 나은" 아이디어를 갖는다는 것을 암시하는 것은 팀 구성원 간의 화합을 촉진하지 않는다. 따라서 전체 점수는 1이다.
[RESULT] 1
```
If you just want to get a score from the evaluation, you can use the following extract_score function.
```python
import re
def extract_score(text):
pattern = re.compile(r'\[RESULT\]\s+([0-5])')
match = pattern.search(text)
    if match:
        score = int(match.group(1))
    else:
        score = 0
    return score
predict_score = extract_score(output_text)
print(predict_score) # 1
```
### **Heatmap Visualization**
[eng->eng] We randomly sampled 200 evaluation examples from the [training data](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection), extracted scores from the model-generated sentences, and compared them to the gold answers. Since the training and test data are not separated, this only shows how well the model fit its training set.
[ko->ko] We sampled 200 evaluation examples from this [test set](https://huggingface.co/datasets/nayohan/feedback-collection-ko-chat/viewer/default/test). llama3-8b-it-prometheus-ko uses only the train set.
- prometheus-7b-v1.0 (English train -> English inference) # 3 samples failed to output a score, 197 in total
- llama3-8b-it-prometheus-ko (Korean train -> Korean inference) # 200 in total

### **Citation**
```bibtex
@misc{kim2023prometheus,
title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
year={2023},
eprint={2310.08491},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Our training code can be found here: [TBD] |
amjadfqs/finalProject | amjadfqs | "2023-06-16T22:28:48Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-06-15T17:30:57Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
model-index:
- name: finalProject
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9890023566378633
- name: Precision
type: precision
value: 0.9894345375382527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finalProject
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
- Accuracy: 0.9890
- F1 Score: 0.9892
- Precision: 0.9894
- Sensitivity: 0.9891
- Specificity: 0.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Precision | Sensitivity | Specificity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:-----------:|:-----------:|
| 0.3384 | 1.0 | 30 | 0.2387 | 0.9144 | 0.9163 | 0.9197 | 0.9146 | 0.9781 |
| 0.1608 | 2.0 | 60 | 0.1635 | 0.9466 | 0.9476 | 0.9485 | 0.9474 | 0.9865 |
| 0.0953 | 3.0 | 90 | 0.0915 | 0.9698 | 0.9703 | 0.9706 | 0.9706 | 0.9924 |
| 0.0573 | 4.0 | 120 | 0.1125 | 0.9607 | 0.9617 | 0.9634 | 0.9621 | 0.9901 |
| 0.0335 | 5.0 | 150 | 0.0536 | 0.9827 | 0.9831 | 0.9837 | 0.9826 | 0.9957 |
| 0.0185 | 6.0 | 180 | 0.0543 | 0.9827 | 0.9830 | 0.9837 | 0.9825 | 0.9957 |
| 0.0226 | 7.0 | 210 | 0.0478 | 0.9859 | 0.9861 | 0.9866 | 0.9856 | 0.9965 |
| 0.0131 | 8.0 | 240 | 0.0468 | 0.9843 | 0.9846 | 0.9847 | 0.9846 | 0.9961 |
| 0.0087 | 9.0 | 270 | 0.0411 | 0.9890 | 0.9892 | 0.9894 | 0.9891 | 0.9972 |
| 0.0043 | 10.0 | 300 | 0.0376 | 0.9886 | 0.9888 | 0.9890 | 0.9887 | 0.9971 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
John6666/roar-realm-of-awesome-realism-v30-sdxl | John6666 | "2024-07-28T20:50:23Z" | 56 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"scifi",
"fantasy",
"landscapes",
"characters",
"versatile",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-28T20:45:02Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- scifi
- fantasy
- landscapes
- characters
- versatile
---
Original model is [here](https://civitai.com/models/393488?modelVersionId=679006).
|
vuongnhathien/test-more-augment | vuongnhathien | "2024-05-27T15:17:34Z" | 191 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-base-22k-384",
"base_model:finetune:facebook/convnextv2-base-22k-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-27T15:14:48Z" | ---
license: apache-2.0
base_model: facebook/convnextv2-base-22k-384
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-more-augment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-more-augment
This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.9146 | 0.7125 |
| No log | 2.0 | 80 | 0.7713 | 0.8 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
msthil/poca-SoccerTwos | msthil | "2023-04-14T21:11:55Z" | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-04-14T21:11:49Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: msthil/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
debin/dqn-SpaceInvadersNoFrameskip-v4 | debin | "2023-06-22T19:59:48Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-22T19:56:25Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 268.50 +/- 68.17
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga debin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga debin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga debin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 25000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
huggingtweets/usmnt-zacksteffen_ | huggingtweets | "2022-05-04T17:19:08Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-05-04T17:18:29Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/usmnt-zacksteffen_/1651684743123/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & Zack Steffen</div>
<div style="text-align: center; font-size: 14px;">@usmnt-zacksteffen_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from USMNT & Zack Steffen.
| Data | USMNT | Zack Steffen |
| --- | --- | --- |
| Tweets downloaded | 3250 | 3120 |
| Retweets | 600 | 869 |
| Short tweets | 215 | 523 |
| Tweets kept | 2435 | 1728 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34uud8si/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usmnt-zacksteffen_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/usmnt-zacksteffen_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Semojak/GK_KcELECTRABase_ver2 | Semojak | "2024-11-11T21:05:03Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-11T21:04:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Chayaaaaa/mistral-7b-shisa-7b-v1 | Chayaaaaa | "2024-04-11T08:55:07Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"ajibawa-2023/Code-Mistral-7B",
"augmxnt/shisa-7b-v1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-11T08:51:11Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- ajibawa-2023/Code-Mistral-7B
- augmxnt/shisa-7b-v1
---
# mistral-7b-shisa-7b-v1
mistral-7b-shisa-7b-v1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [ajibawa-2023/Code-Mistral-7B](https://huggingface.co/ajibawa-2023/Code-Mistral-7B)
* [augmxnt/shisa-7b-v1](https://huggingface.co/augmxnt/shisa-7b-v1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: ajibawa-2023/Code-Mistral-7B
layer_range: [0, 32]
- model: augmxnt/shisa-7b-v1
layer_range: [0, 32]
merge_method: slerp
base_model: ajibawa-2023/Code-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Gayathri142214002/t5_Comp_Question_Generation_4 | Gayathri142214002 | "2023-10-10T14:26:47Z" | 161 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-10-10T13:55:52Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_Comp_Question_Generation_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_Comp_Question_Generation_4
This model is a fine-tuned version of [Gayathri142214002/t5_Comp_Question_Generation_3](https://huggingface.co/Gayathri142214002/t5_Comp_Question_Generation_3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF | mradermacher | "2024-10-11T18:57:16Z" | 29 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:lemon07r/Gemma-2-Ataraxy-v3i-9B",
"base_model:quantized:lemon07r/Gemma-2-Ataraxy-v3i-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-10-06T06:35:21Z" | ---
base_model: lemon07r/Gemma-2-Ataraxy-v3i-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3i-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
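As a hedged example, one of the quants listed below can be fetched with huggingface_hub and then run with llama.cpp; the filename shown is one entry from the table, and the surrounding setup is an assumption:
```python
# Sketch: download a single GGUF quant from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF",
    filename="Gemma-2-Ataraxy-v3i-9B.i1-Q4_K_M.gguf",  # recommended quant in the table below
)
print(path)  # pass this path to llama.cpp or any GGUF-capable runtime
```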
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v3i-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v3i-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
epinnock/codellama-70-evol-feedback-lora | epinnock | "2024-01-30T06:44:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-30T06:38:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sokonana/distilbert-base-uncased-finetuned-emotion | sokonana | "2024-09-10T15:34:00Z" | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-09-08T16:01:56Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1791
- Accuracy: 0.9325
- F1: 0.9326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2398 | 1.0 | 250 | 0.2386 | 0.912 | 0.9134 |
| 0.1611 | 2.0 | 500 | 0.1875 | 0.9255 | 0.9255 |
| 0.1296 | 3.0 | 750 | 0.1877 | 0.924 | 0.9245 |
| 0.1069 | 4.0 | 1000 | 0.1791 | 0.9325 | 0.9326 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.16.1
- Tokenizers 0.19.1
|
rodekruis/nlrc-pmer-midmat-labels | rodekruis | "2024-06-26T13:11:34Z" | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2024-06-26T13:11:00Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# wdejong/midmat_labels
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("wdejong/midmat_labels")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
MaziyarPanahi/MistraMystic-GGUF | MaziyarPanahi | "2024-11-01T04:24:58Z" | 30 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:choco58/MistraMystic",
"base_model:quantized:choco58/MistraMystic",
"region:us",
"conversational"
] | text-generation | "2024-11-01T04:03:27Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: MistraMystic-GGUF
base_model: choco58/MistraMystic
inference: false
model_creator: choco58
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MistraMystic-GGUF](https://huggingface.co/MaziyarPanahi/MistraMystic-GGUF)
- Model creator: [choco58](https://huggingface.co/choco58)
- Original model: [choco58/MistraMystic](https://huggingface.co/choco58/MistraMystic)
## Description
[MaziyarPanahi/MistraMystic-GGUF](https://huggingface.co/MaziyarPanahi/MistraMystic-GGUF) contains GGUF format model files for [choco58/MistraMystic](https://huggingface.co/choco58/MistraMystic).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
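As a minimal, hedged example with llama-cpp-python from the list above (the quant filename is a placeholder — use whichever file you downloaded from this repo):
```python
# Sketch: run a GGUF quant of MistraMystic locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="MistraMystic.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```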
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
owanr/ghc-roberta-base-inter-shuffle-model_annots_alpha0.8_whole_1e-05 | owanr | "2023-12-16T04:37:37Z" | 0 | 0 | null | [
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | "2023-12-15T10:29:02Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: ghc-roberta-base-inter-shuffle-model_annots_alpha0.8_whole_1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ghc-roberta-base-inter-shuffle-model_annots_alpha0.8_whole_1e-05
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4907 | 1.0 | 11020 | 2.3848 |
| 2.5894 | 2.0 | 22040 | 2.6244 |
| 2.4341 | 3.0 | 33060 | 2.6244 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Prisma-Multimodal/celeba-sae-top_k-64-patches_only-layer_6-hook_resid_post-64-84 | Prisma-Multimodal | "2025-01-19T04:50:45Z" | 5 | 0 | null | [
"region:us"
] | null | "2025-01-19T04:50:36Z" | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 6
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: False
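For intuition, here is a minimal PyTorch sketch of a top-k sparse autoencoder with the dimensions above (64 active features out of a 49,152-entry dictionary over 768-dimensional activations). It is illustrative only; the checkpoint's actual parameter names, biases, and normalization details are not documented here.

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_in: int = 768, d_dict: int = 49152, k: int = 64):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_dict)
        self.decoder = nn.Linear(d_dict, d_in)
        self.k = k

    def forward(self, x: torch.Tensor):
        acts = self.encoder(x)
        # Keep only the k largest activations per input; zero out the rest
        top = torch.topk(acts, self.k, dim=-1)
        sparse = torch.zeros_like(acts).scatter(-1, top.indices, top.values)
        return self.decoder(sparse), sparse
```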
### Training
- **Training Images**: 648254
- **Learning Rate**: 0.0011
- **L1 Coefficient**: 0.0002
- **Batch Size**: 4096
- **Context Size**: 49
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 64
- **Dead Features**: 0
- **Mean Log10 Feature Sparsity**: -3.2826
- **Features Below 1e-5**: 5
- **Features Below 1e-6**: 0
- **Mean Passes Since Fired**: 0.5183
### Reconstruction
- **Explained Variance**: 0.8467
- **Explained Variance Std**: 0.0500
- **MSE Loss**: 0.0014
- **L1 Loss**: 0
- **Overall Loss**: 0.0014
## Training Details
- **Training Duration**: 2006 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 500
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/celeba_checkpoints_2/d2da2fec-tinyclip_sae_16_hyperparam_sweep_lr/n_images_648338.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/celeba-patches_remaining_layers/runs/o55or0au
- **Random Seed**: 42
|
friendshipkim/Llama-3.1-8B-Instruct-pruned-h0.43-i0.43-a0.0-d0.0-bf16 | friendshipkim | "2025-03-04T00:13:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T00:11:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jerryw/my_bert-base-cased | jerryw | "2022-08-04T01:38:04Z" | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-08-04T01:34:19Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my_bert-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my_bert-base-cased
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pfactorial/checkpoint-22500-epoch-20 | pfactorial | "2022-05-03T05:48:55Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-03T03:25:44Z" | this is a Questions generating mode
|
musketshugging/qwen-poker-tuned-v3 | musketshugging | "2025-03-16T11:07:52Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-03-16T11:00:51Z" |
# Qwen2.5-7B Poker Tuned Model v3
This is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct optimized for poker gameplay.
The model has been trained to play poker and make strategic decisions in a Texas Hold'em poker game.
Original adapter weights were loaded from the local directory: poker-trained-bot-v3/checkpoint-2812
|
rameshsubrahmanyam/gemma-2-2B-it-thinking-function_calling-V0 | rameshsubrahmanyam | "2025-03-30T01:31:11Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | "2025-03-30T01:29:35Z" | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rameshsubrahmanyam/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf | RichardErkhov | "2024-10-06T13:20:13Z" | 75 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-10-06T11:06:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-2-electrical-engineering - GGUF
- Model creator: https://huggingface.co/STEM-AI-mtl/
- Original model: https://huggingface.co/STEM-AI-mtl/phi-2-electrical-engineering/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-2-electrical-engineering.Q2_K.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q2_K.gguf) | Q2_K | 1.01GB |
| [phi-2-electrical-engineering.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.IQ3_XS.gguf) | IQ3_XS | 1.14GB |
| [phi-2-electrical-engineering.IQ3_S.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.IQ3_S.gguf) | IQ3_S | 1.16GB |
| [phi-2-electrical-engineering.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [phi-2-electrical-engineering.IQ3_M.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.IQ3_M.gguf) | IQ3_M | 1.28GB |
| [phi-2-electrical-engineering.Q3_K.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q3_K.gguf) | Q3_K | 1.38GB |
| [phi-2-electrical-engineering.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [phi-2-electrical-engineering.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [phi-2-electrical-engineering.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [phi-2-electrical-engineering.Q4_0.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q4_0.gguf) | Q4_0 | 1.49GB |
| [phi-2-electrical-engineering.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [phi-2-electrical-engineering.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [phi-2-electrical-engineering.Q4_K.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q4_K.gguf) | Q4_K | 1.67GB |
| [phi-2-electrical-engineering.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q4_K_M.gguf) | Q4_K_M | 1.67GB |
| [phi-2-electrical-engineering.Q4_1.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q4_1.gguf) | Q4_1 | 1.65GB |
| [phi-2-electrical-engineering.Q5_0.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q5_0.gguf) | Q5_0 | 1.8GB |
| [phi-2-electrical-engineering.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [phi-2-electrical-engineering.Q5_K.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q5_K.gguf) | Q5_K | 1.93GB |
| [phi-2-electrical-engineering.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [phi-2-electrical-engineering.Q5_1.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q5_1.gguf) | Q5_1 | 1.95GB |
| [phi-2-electrical-engineering.Q6_K.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q6_K.gguf) | Q6_K | 2.13GB |
| [phi-2-electrical-engineering.Q8_0.gguf](https://huggingface.co/RichardErkhov/STEM-AI-mtl_-_phi-2-electrical-engineering-gguf/blob/main/phi-2-electrical-engineering.Q8_0.gguf) | Q8_0 | 2.75GB |
Original model description:
---
license: other
license_name: stem.ai.mtl
license_link: LICENSE
language:
- en
tags:
- phi-2
- electrical engineering
- Microsoft
datasets:
- STEM-AI-mtl/Electrical-engineering
- garage-bAInd/Open-Platypus
task_categories:
- question-answering
- text-generation
pipeline_tag: text-generation
widget:
- text: "Enter your instruction here"
inference: true
auto_sample: true
inference_code: chat-GPTQ.py
library_tag: transformers
---
# For the electrical engineering community
A unique, deployable and efficient 2.7 billion parameters model in the field of electrical engineering. This repo contains the adapters from the LoRa fine-tuning of the phi-2 model from Microsoft. It was trained on the [STEM-AI-mtl/Electrical-engineering](https://huggingface.co/datasets/STEM-AI-mtl/Electrical-engineering) dataset combined with [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
- **Developed by:** STEM.AI
- **Model type:** Q&A and code generation
- **Language(s) (NLP):** English
- **Finetuned from model:** [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
### Direct Use
Q&A related to electrical engineering and the KiCad software, as well as generation of Python code in general and for KiCad's scripting console.
Refer to [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) model card for recommended prompt format.
### Inference script
[Standard](https://github.com/STEM-ai/Phi-2/blob/4eaa6aaa2679427a810ace5a061b9c951942d66a/chat.py)
[GPTQ format](https://github.com/STEM-ai/Phi-2/blob/ab1ced8d7922765344d824acf1924df99606b4fc/chat-GPTQ.py)
## Training Details
### Training Data
Dataset related to electrical engineering: [STEM-AI-mtl/Electrical-engineering](https://huggingface.co/datasets/STEM-AI-mtl/Electrical-engineering)
It is composed of queries, 65% about general electrical engineering, 25% about Kicad (EDA software) and 10% about Python code for Kicad's scripting console.
In addition, a dataset related to STEM and NLP: [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
### Training Procedure
[LoRa script](https://github.com/STEM-ai/Phi-2/blob/4eaa6aaa2679427a810ace5a061b9c951942d66a/LoRa.py)
A LoRa PEFT was performed on a 48 Gb A40 Nvidia GPU.
## Model Card Authors
STEM.AI: [email protected]\
[William Harbec](https://www.linkedin.com/in/william-harbec-56a262248/)
|
jssky/7cc6b5f6-3880-42e3-a487-30163acf6b99 | jssky | "2025-04-11T09:07:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-11T08:31:26Z" | |
blockblockblock/TinyLlama-1.1B-32k-Instruct-bpw4.6 | blockblockblock | "2024-03-13T05:16:56Z" | 5 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/airoboros-3.2",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-03-13T05:16:36Z" | ---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
datasets:
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# TinyLlama-1.1B-32k-Instruct
This is an instruct tune of [TinyLlama-1.1B-32k](https://huggingface.co/Doctor-Shotgun/TinyLlama-1.1B-32k) on several open-source instruct datasets, intended primarily for speculative decoding.
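Since the card highlights speculative decoding, here is a hedged sketch using transformers' assisted generation. Both repo ids below are assumptions/placeholders, and classic assisted generation requires the draft and target to share a tokenizer (TinyLlama uses the Llama-2 tokenizer):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Both repo ids below are assumptions/placeholders
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
target = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
draft = AutoModelForCausalLM.from_pretrained("Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct")

inputs = tok("Tell me a short story.", return_tensors="pt")
# The draft model proposes tokens that the target model verifies in parallel
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```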
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
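As a small illustrative helper (not from the original card), this is one way to assemble that format programmatically:

```python
def build_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble the modified multi-turn Alpaca format shown above."""
    parts = [f"### Instruction:\n{system}"]
    for user, model in turns:
        parts.append(f"### Input:\n{user}")
        parts.append(f"### Response:\n{model}")
    parts.append(f"### Input:\n{user_msg}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)
```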
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a full finetune for 3 epochs using a single A100 GPU for around 3.5 hours. |
mradermacher/based-13b-i1-GGUF | mradermacher | "2025-03-15T22:30:56Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ehartford/based",
"base_model:cognitivecomputations/based-13b",
"base_model:quantized:cognitivecomputations/based-13b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-03-15T15:54:12Z" | ---
base_model: cognitivecomputations/based-13b
datasets:
- ehartford/based
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/based-13b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/based-13b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
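As a hedged sketch, a single quant can also be fetched programmatically with `huggingface_hub` (the filename must match one of the entries in the table below):

```python
from huggingface_hub import hf_hub_download

# Filename must match one of the quants listed in the table below
path = hf_hub_download(
    repo_id="mradermacher/based-13b-i1-GGUF",
    filename="based-13b.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```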
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q4_1.gguf) | i1-Q4_1 | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/based-13b-i1-GGUF/resolve/main/based-13b.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF | mradermacher | "2025-04-12T09:57:58Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Qanadil/qwen2.5-0.5b-instruct-arabic",
"base_model:quantized:Qanadil/qwen2.5-0.5b-instruct-arabic",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-12T09:53:21Z" | ---
base_model: Qanadil/qwen2.5-0.5b-instruct-arabic
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qanadil/qwen2.5-0.5b-instruct-arabic
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/qwen2.5-0.5b-instruct-arabic-GGUF/resolve/main/qwen2.5-0.5b-instruct-arabic.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mwitiderrick/zephyr-7b-beta-llamini | mwitiderrick | "2023-10-27T11:09:21Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-27T09:58:34Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
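For reference, here is a hedged reconstruction of this configuration with transformers' `BitsAndBytesConfig` (illustrative only; the original training code is not shown here):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```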
### Framework versions
- PEFT 0.5.0
|
kristonai/falco | kristonai | "2023-09-07T06:06:41Z" | 0 | 0 | null | [
"license:bsd",
"region:us"
] | null | "2023-08-21T07:08:02Z" | ---
license: bsd
---
# Model Card for FALCO-TTS
<!-- Provide a quick summary of what the model is/does. -->
This model implements a three-stage, SPEAR-TTS-like pipeline, supporting zero-shot and cross-language speech synthesis.
We trained it on the MLS (https://openslr.org/94/) and WenetSpeech (https://openslr.org/121/) corpora, using about 20,000 hours of English and Mandarin data.
The model has automatic code-switching capability.
## Model Details
|Model |Parameters |Attention |Output Vocab size
|:--- |:---- |:--- |:---
|text_to_semantic |240 M |Causal |1024
|semantic_to_acoustic |370 M |Causal |8x 1,024 |
bpavlsh/bart-crypto-summary | bpavlsh | "2024-11-29T20:38:32Z" | 126 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"news analytics",
"cryptocurrency",
"crypto",
"Bitcoin",
"Ethereum",
"Seq2Seq",
"en",
"arxiv:2308.13032",
"arxiv:2309.04704",
"arxiv:2201.02729",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-11-29T19:18:22Z" | ---
library_name: transformers
tags:
- news analytics
- cryptocurrency
- crypto
- Bitcoin
- Ethereum
- Seq2Seq
language:
- en
base_model:
- facebook/bart-large
---
# Seq2Seq Model bpavlsh/bart-crypto-summary
### Model Description
This fine-tuned Seq2Seq model analyses and summarizes cryptocurrency news for the following coins:
Bitcoin, Ethereum, Tether, Solana, and Binance Coin. The maximum input size is 1024 tokens, roughly 3.5K characters of text.
The model was created by fine-tuning the facebook/bart-large transformer model.
It outputs a short text summary plus uptrend/downtrend lists for the coins above whenever their trends are discussed in the news text.
## How to Get Started with the Model
Use the code below to get started with the model:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bpavlsh/bart-crypto-summary")
txt="""
The crypto market shows mixed signals. Bitcoin (BTC) and Ethereum (ETH) are experiencing a slight downturn, weighed down by bearish
investor sentiment, while Solana (SOL) sees a sharp uptrend driven by increased on-chain activity.
"""
result=summarizer(txt, early_stopping=True)[0]['summary_text']
print(result)
Result:
"""
Bitcoin and Ethereum are experiencing a slight downturn with bearish investor sentiment, while Solana shows a strong uptrend driven by increased on-chain activity.
Uptrend: Solana.
Downtrend: Bitcoin, Ethereum.
"""
```
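Because the trend lists are embedded in plain text, a small illustrative parser can pull them out (assuming the `Uptrend:`/`Downtrend:` line format shown above generalizes to other outputs):

```python
def parse_trends(summary: str) -> dict:
    """Extract 'Uptrend:'/'Downtrend:' coin lists from the summary text."""
    trends = {"uptrend": [], "downtrend": []}
    for line in summary.splitlines():
        line = line.strip()
        for key in trends:
            if line.lower().startswith(key + ":"):
                coins = line.split(":", 1)[1].rstrip(".")
                trends[key] = [c.strip() for c in coins.split(",") if c.strip()]
    return trends

print(parse_trends(result))
# {'uptrend': ['Solana'], 'downtrend': ['Bitcoin', 'Ethereum']}
```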
## Disclaimer
We share this model and its results for academic purposes only;
this is not financial advice or a recommendation for any real business or investment.
## Contacts
B. Pavlyshenko https://www.linkedin.com/in/bpavlyshenko
## References
Pavlyshenko B.M. Financial News Analytics Using Fine-Tuned Llama 2 GPT Model. arXiv preprint arXiv:2308.13032. 2023. Download PDF: https://arxiv.org/pdf/2308.13032.pdf
Pavlyshenko B.M. Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model. arXiv preprint arXiv:2309.04704. 2023. Download PDF: https://arxiv.org/pdf/2309.04704.pdf
Pavlyshenko, B.M. Bitcoin Price Predictive Modeling Using Expert Correction. 2019 XIth International Scientific and Practical Conference on Electronics and Information Technologies (ELIT), September 16 – 18, 2019 Lviv, Ukraine, pages: 163-167. Download PDF: https://arxiv.org/pdf/2201.02729 |
nttx/aa090887-f294-47c7-b00b-00fd23a59b1a | nttx | "2025-02-08T01:35:14Z" | 27 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:adapter:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"region:us"
] | null | "2025-02-08T01:30:21Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa090887-f294-47c7-b00b-00fd23a59b1a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-410m-deduped
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb754e8bb691189a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb754e8bb691189a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/aa090887-f294-47c7-b00b-00fd23a59b1a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 8
mlflow_experiment_name: /tmp/fb754e8bb691189a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 24f7de68-1159-4c48-b0be-d915cf4e48ed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 24f7de68-1159-4c48-b0be-d915cf4e48ed
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aa090887-f294-47c7-b00b-00fd23a59b1a
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 265
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.0057 | 0.0038 | 1 | 3.5411 |
| 3.3286 | 0.3784 | 100 | 1.6186 |
| 2.8543 | 0.7569 | 200 | 1.2383 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AcerTest359/ggml_clip-vit-base-patch32 | AcerTest359 | "2025-01-10T10:06:02Z" | 247 | 0 | null | [
"gguf",
"clip",
"vision",
"ggml",
"clip.cpp",
"clip-cpp-gguf",
"license:mit",
"region:us"
] | null | "2025-01-10T10:06:02Z" | ---
license: mit
tags:
- clip
- vision
- ggml
- clip.cpp
- clip-cpp-gguf
---
## Converted files for use with clip.cpp
See https://github.com/monatis/clip.cpp
# Experimental
The file format is not stable yet, so expect breaking changes. I will update the files from time to time.
|
skarsa/babe_topic_subsamples_model_alpha_inf_idx_3 | skarsa | "2025-02-11T14:13:13Z" | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T20:24:16Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: babe_topic_subsamples_model_alpha_inf_idx_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babe_topic_subsamples_model_alpha_inf_idx_3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
JDB03/PPO-SnowballTarget | JDB03 | "2024-01-13T13:27:05Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-01-13T13:25:51Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JDB03/PPO-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
catcatcat3/PPO-LunarLander-v2 | catcatcat3 | "2023-12-04T17:38:34Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-03T11:22:21Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 294.32 +/- 18.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is an assumption) and load it
checkpoint = load_from_hub("catcatcat3/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
prxy5604/1255d2a6-69ec-40d2-b0da-ff57b09a7a61 | prxy5604 | "2025-01-30T14:40:14Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-30T14:13:28Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1255d2a6-69ec-40d2-b0da-ff57b09a7a61
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e168c8f3714f4a58_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e168c8f3714f4a58_train_data.json
type:
field_instruction: question_body
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/1255d2a6-69ec-40d2-b0da-ff57b09a7a61
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e168c8f3714f4a58_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 617d6397-53f1-4f5e-bb1a-7f4c802130b6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 617d6397-53f1-4f5e-bb1a-7f4c802130b6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1255d2a6-69ec-40d2-b0da-ff57b09a7a61
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.6186 | 0.0103 | 1 | 5.2241 |
| 2.6053 | 0.5155 | 50 | 2.3891 |
| 2.4918 | 1.0309 | 100 | 2.2771 |
| 2.1446 | 1.5464 | 150 | 2.3156 |
| 1.8593 | 2.0619 | 200 | 2.2874 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aarnphm/llama-2-dolly-qlora | aarnphm | "2023-07-21T17:29:12Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-21T17:29:09Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Tahahah/ddpm-butterflies-128 | Tahahah | "2022-09-07T13:31:44Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2022-09-07T02:25:45Z" | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: /content/drive/Shareddrives/artGAN S2 2022/sugimori-artwork
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/content/drive/Shareddrives/artGAN S2 2022/sugimori-artwork` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Tahahah/ddpm-butterflies-128")
image = pipeline().images[0]  # sample one image from the trained model
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Tahahah/ddpm-butterflies-128/tensorboard?#scalars)
|
ntviet/whisper-small-hre5.1 | ntviet | "2025-02-22T00:44:14Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hre",
"dataset:ntviet/Hre-audio-dataset7",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-02-21T23:36:56Z" | ---
library_name: transformers
language:
- hre
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ntviet/Hre-audio-dataset7
model-index:
- name: Whisper Small Hre 5.1, ASR for male & female Hre voice, 1000 steps, metric
CER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hre 5.1, ASR for male & female Hre voice, 1000 steps, metric CER
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Hre audio dataset 7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0437
- Cer Ortho: 1.6227
- Cer: 0.8702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
| 0.1106 | 3.2362 | 1000 | 0.0437 | 1.6227 | 0.8702 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Prashasst/anime-recommendation-model | Prashasst | "2024-12-25T20:02:00Z" | 84 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2353",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-12-25T16:38:06Z" | ---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2353
- loss:CosineSimilarityLoss
widget:
- source_sentence: A year has passed since "The Black Rebellion" and the remaining
Black Knights have vanished into the shadows, their leader and figurehead, Zero,
executed by the Britannian Empire. Area 11 is once more squirming under the Emperors
oppressive heel as the Britannian armies concentrate their attacks on the European
front. But for the Britannians living in Area 11, life is back to normal. On one
such normal day, a Britannian student, skipping his classes in the Ashford Academy,
sneaks out to gamble on his chess play. But unknown to this young man, several
forces are eying him from the shadows, for soon, he will experience a shocking
encounter with his own obscured past, and the masked rebel mastermind Zero will
return.
sentences:
- Politics
- Mythology
- Disability
- source_sentence: 'In a land where corruption rules and a ruthless Prime Minister
has turned the puppet Emperors armies of soldiers, assassins and secret police
against the people, only one force dares to stand against them: Night Raid, an
elite team of relentless killers, each equipped with an Imperial Arm - legendary
weapons with unique and incredible powers created in the distant past.'
sentences:
- Kuudere
- Tragedy
- Seinen
- source_sentence: Theres a rumor about a mysterious phenomenon called "puberty syndrome."
For example, Sakuta Azusagawa is a high school student who suddenly sees a bunny
girl appear in front of him. The girl is actually a girl named Mai Sakurajima,
who is Sakutas upperclassman who is also a famous actress who has gone on hiatus
from the entertainment industry. For some reason, the people around Mai cannot
see her bunny-girl figure. Sakuta sets out to solve this mystery, and as he spends
time with Mai, he learns her secret feelings. Other heroines who have "puberty
syndrome" start to appear in front of Sakuta.
sentences:
- Heterosexual
- Drama
- Episodic
- source_sentence: Dororo, a young orphan thief, meets Hyakkimaru, a powerful ronin.
Hyakkimarus father, a greedy feudal lord, had made a pact with 12 demons, offering
his yet-unborn sons body parts in exchange for great power. Thus, Hyakkimaru -
who was born without arms, legs, eyes, ears, a nose or a mouth - was abandoned
in a river as a baby. Rescued and raised by Dr. Honma, who equips him with artificial
limbs and teaches him sword-fighting techniques, Hyakkimaru discovers that each
time he slays a demon, a piece of his body is restored. Now, he roams the war-torn
countryside in search of demons.
sentences:
- Urban
- Heterosexual
- Demons
- source_sentence: Everyone has a part of themselves they cannot show to anyone else.
sentences:
- Transgender
- Crime
- Comedy
model-index:
- name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: anime recommendation dev
type: anime-recommendation-dev
metrics:
- type: pearson_cosine
value: 0.6144532877889222
name: Pearson Cosine
- type: spearman_cosine
value: 0.6215240802205049
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: anime recommendation test
type: anime-recommendation-test
metrics:
- type: pearson_cosine
value: 0.6535704432727567
name: Pearson Cosine
- type: spearman_cosine
value: 0.6393952594394526
name: Spearman Cosine
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 9a3225965996d404b775526de6dbfe85d3368642 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Prashasst/anime-recommendation-model")
# Run inference
sentences = [
'I want anime like onepiece.',
'Pirates',
'Action',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `anime-recommendation-dev` and `anime-recommendation-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | anime-recommendation-dev | anime-recommendation-test |
|:--------------------|:-------------------------|:--------------------------|
| pearson_cosine | 0.6145 | 0.6536 |
| **spearman_cosine** | **0.6215** | **0.6394** |
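As a rough, hypothetical sketch of how such correlations can be computed with this evaluator (the pairs and gold scores below are invented for illustration; the actual dev/test splits are not included in this card):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Prashasst/anime-recommendation-model")

# Invented (description, genre) pairs with gold similarity labels in [0, 1]
descriptions = [
    "A young pirate sets sail to find a legendary treasure.",
    "Two estranged friends reunite to confront a tragic past.",
]
genres = ["Pirates", "Drama"]
gold_scores = [0.9, 0.7]

evaluator = EmbeddingSimilarityEvaluator(
    descriptions, genres, gold_scores, name="anime-recommendation-dev"
)
print(evaluator(model))  # dict with Pearson/Spearman cosine correlations
```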
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,353 training samples
* Columns: <code>description</code>, <code>genre</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | description | genre | label |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 15 tokens</li><li>mean: 97.39 tokens</li><li>max: 193 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.82 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.71</li><li>max: 1.0</li></ul> |
* Samples:
| description | genre | label |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------|:------------------|
| <code>Mitsuha Miyamizu, a high school girl, yearns to live the life of a boy in the bustling city of Tokyo—a dream that stands in stark contrast to her present life in the countryside. Meanwhile in the city, Taki Tachibana lives a busy life as a high school student while juggling his part-time job and hopes for a future in architecture.</code> | <code>Environmental</code> | <code>0.6</code> |
| <code>Jinta Yadomi and his group of childhood friends have become estranged after a tragic accident split them apart. Now in their high school years, a sudden surprise forces each of them to confront their guilt over what happened that day and come to terms with the ghosts of their past.</code> | <code>Hikikomori</code> | <code>0.79</code> |
| <code>The second season of <i>Ansatsu Kyoushitsu</i>.</code> | <code>Episodic</code> | <code>0.44</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 294 evaluation samples
* Columns: <code>description</code>, <code>genre</code>, and <code>label</code>
* Approximate statistics based on the first 294 samples:
| | description | genre | label |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 15 tokens</li><li>mean: 92.48 tokens</li><li>max: 193 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.73 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.69</li><li>max: 1.0</li></ul> |
* Samples:
| description | genre | label |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------|:------------------|
| <code>Summer is here, and the heroes of Class 1-A and 1-B are in for the toughest training camp of their lives A group of seasoned pros pushes everyones Quirks to new heights as the students face one overwhelming challenge after another. Braving the elements in this secret location becomes the least of their worries when routine training turns into a critical struggle for survival.</code> | <code>Transgender</code> | <code>0.2</code> |
| <code>"In order for something to be obtained, something of equal value must be lost."</code> | <code>Cyborg</code> | <code>0.72</code> |
| <code>In the story, Subaru Natsuki is an ordinary high school student who is lost in an alternate world, where he is rescued by a beautiful, silver-haired girl. He stays near her to return the favor, but the destiny she is burdened with is more than Subaru can imagine. Enemies attack one by one, and both of them are killed. He then finds out he has the power to rewind death, back to the time he first came to this world. But only he remembers what has happened since.</code> | <code>Primarily Female Cast</code> | <code>0.61</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
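For orientation, here is a minimal, hypothetical fine-tuning sketch using `CosineSimilarityLoss` as configured above (the triples below are invented; the actual 2,353-sample dataset is not published with this card):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Invented (description, genre, label) triples mirroring the columns above
train_examples = [
    InputExample(texts=["A ronin hunts demons to restore his stolen body.", "Demons"], label=0.9),
    InputExample(texts=["A quiet high-school romance in Tokyo.", "Crime"], label=0.2),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # MSE between cosine similarity and label

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```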
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | anime-recommendation-dev_spearman_cosine | anime-recommendation-test_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:----------------------------------------:|:-----------------------------------------:|
| 0.0068 | 1 | 0.3882 | - | - | - |
| 0.0135 | 2 | 0.2697 | - | - | - |
| 0.0203 | 3 | 0.2648 | - | - | - |
| 0.0270 | 4 | 0.3022 | - | - | - |
| 0.0338 | 5 | 0.2665 | - | - | - |
| 0.0405 | 6 | 0.2923 | - | - | - |
| 0.0473 | 7 | 0.3165 | - | - | - |
| 0.0541 | 8 | 0.2069 | - | - | - |
| 0.0608 | 9 | 0.271 | - | - | - |
| 0.0676 | 10 | 0.1974 | - | - | - |
| 0.0743 | 11 | 0.156 | - | - | - |
| 0.0811 | 12 | 0.1035 | - | - | - |
| 0.0878 | 13 | 0.1046 | - | - | - |
| 0.0946 | 14 | 0.0579 | - | - | - |
| 0.1014 | 15 | 0.0904 | - | - | - |
| 0.1081 | 16 | 0.0734 | - | - | - |
| 0.1149 | 17 | 0.0396 | - | - | - |
| 0.1216 | 18 | 0.0219 | - | - | - |
| 0.1284 | 19 | 0.0672 | - | - | - |
| 0.1351 | 20 | 0.0567 | - | - | - |
| 0.1419 | 21 | 0.0969 | - | - | - |
| 0.1486 | 22 | 0.0258 | - | - | - |
| 0.1554 | 23 | 0.1174 | - | - | - |
| 0.1622 | 24 | 0.0334 | - | - | - |
| 0.1689 | 25 | 0.0661 | - | - | - |
| 0.1757 | 26 | 0.0365 | - | - | - |
| 0.1824 | 27 | 0.049 | - | - | - |
| 0.1892 | 28 | 0.0889 | - | - | - |
| 0.1959 | 29 | 0.0179 | - | - | - |
| 0.2027 | 30 | 0.0255 | - | - | - |
| 0.2095 | 31 | 0.0312 | - | - | - |
| 0.2162 | 32 | 0.0312 | - | - | - |
| 0.2230 | 33 | 0.0619 | - | - | - |
| 0.2297 | 34 | 0.0358 | - | - | - |
| 0.2365 | 35 | 0.0468 | - | - | - |
| 0.2432 | 36 | 0.0601 | - | - | - |
| 0.25 | 37 | 0.0546 | - | - | - |
| 0.2568 | 38 | 0.0411 | - | - | - |
| 0.2635 | 39 | 0.0332 | - | - | - |
| 0.2703 | 40 | 0.0479 | - | - | - |
| 0.2770 | 41 | 0.0657 | - | - | - |
| 0.2838 | 42 | 0.0161 | - | - | - |
| 0.2905 | 43 | 0.0323 | - | - | - |
| 0.2973 | 44 | 0.0794 | - | - | - |
| 0.3041 | 45 | 0.0264 | - | - | - |
| 0.3108 | 46 | 0.0391 | - | - | - |
| 0.3176 | 47 | 0.0514 | - | - | - |
| 0.3243 | 48 | 0.0276 | - | - | - |
| 0.3311 | 49 | 0.0653 | - | - | - |
| 0.3378 | 50 | 0.0343 | - | - | - |
| 0.3446 | 51 | 0.0369 | - | - | - |
| 0.3514 | 52 | 0.0336 | - | - | - |
| 0.3581 | 53 | 0.0368 | - | - | - |
| 0.3649 | 54 | 0.0477 | - | - | - |
| 0.3716 | 55 | 0.0358 | - | - | - |
| 0.3784 | 56 | 0.0312 | - | - | - |
| 0.3851 | 57 | 0.0388 | - | - | - |
| 0.3919 | 58 | 0.0415 | - | - | - |
| 0.3986 | 59 | 0.02 | - | - | - |
| 0.4054 | 60 | 0.0459 | - | - | - |
| 0.4122 | 61 | 0.0302 | - | - | - |
| 0.4189 | 62 | 0.0519 | - | - | - |
| 0.4257 | 63 | 0.0283 | - | - | - |
| 0.4324 | 64 | 0.04 | - | - | - |
| 0.4392 | 65 | 0.0146 | - | - | - |
| 0.4459 | 66 | 0.033 | - | - | - |
| 0.4527 | 67 | 0.0365 | - | - | - |
| 0.4595 | 68 | 0.0579 | - | - | - |
| 0.4662 | 69 | 0.0253 | - | - | - |
| 0.4730 | 70 | 0.033 | - | - | - |
| 0.4797 | 71 | 0.0258 | - | - | - |
| 0.4865 | 72 | 0.0181 | - | - | - |
| 0.4932 | 73 | 0.0334 | - | - | - |
| 0.5 | 74 | 0.0415 | - | - | - |
| 0.5068 | 75 | 0.0258 | - | - | - |
| 0.5135 | 76 | 0.0304 | - | - | - |
| 0.5203 | 77 | 0.0211 | - | - | - |
| 0.5270 | 78 | 0.0334 | - | - | - |
| 0.5338 | 79 | 0.0278 | - | - | - |
| 0.5405 | 80 | 0.0209 | - | - | - |
| 0.5473 | 81 | 0.0391 | - | - | - |
| 0.5541 | 82 | 0.0274 | - | - | - |
| 0.5608 | 83 | 0.0213 | - | - | - |
| 0.5676 | 84 | 0.0293 | - | - | - |
| 0.5743 | 85 | 0.0205 | - | - | - |
| 0.5811 | 86 | 0.0258 | - | - | - |
| 0.5878 | 87 | 0.0262 | - | - | - |
| 0.5946 | 88 | 0.0109 | - | - | - |
| 0.6014 | 89 | 0.0268 | - | - | - |
| 0.6081 | 90 | 0.0304 | - | - | - |
| 0.6149 | 91 | 0.0328 | - | - | - |
| 0.6216 | 92 | 0.0173 | - | - | - |
| 0.6284 | 93 | 0.0253 | - | - | - |
| 0.6351 | 94 | 0.0245 | - | - | - |
| 0.6419 | 95 | 0.0232 | - | - | - |
| 0.6486 | 96 | 0.0309 | - | - | - |
| 0.6554 | 97 | 0.0209 | - | - | - |
| 0.6622 | 98 | 0.0169 | - | - | - |
| 0.6689 | 99 | 0.024 | - | - | - |
| 0.6757 | 100 | 0.0166 | 0.0284 | 0.6215 | - |
| 0.6824 | 101 | 0.0202 | - | - | - |
| 0.6892 | 102 | 0.0181 | - | - | - |
| 0.6959 | 103 | 0.0413 | - | - | - |
| 0.7027 | 104 | 0.0537 | - | - | - |
| 0.7095 | 105 | 0.0241 | - | - | - |
| 0.7162 | 106 | 0.0199 | - | - | - |
| 0.7230 | 107 | 0.0227 | - | - | - |
| 0.7297 | 108 | 0.0283 | - | - | - |
| 0.7365 | 109 | 0.0372 | - | - | - |
| 0.7432 | 110 | 0.0193 | - | - | - |
| 0.75 | 111 | 0.0147 | - | - | - |
| 0.7568 | 112 | 0.0594 | - | - | - |
| 0.7635 | 113 | 0.0185 | - | - | - |
| 0.7703 | 114 | 0.0674 | - | - | - |
| 0.7770 | 115 | 0.0212 | - | - | - |
| 0.7838 | 116 | 0.0268 | - | - | - |
| 0.7905 | 117 | 0.0233 | - | - | - |
| 0.7973 | 118 | 0.0276 | - | - | - |
| 0.8041 | 119 | 0.0242 | - | - | - |
| 0.8108 | 120 | 0.034 | - | - | - |
| 0.8176 | 121 | 0.0231 | - | - | - |
| 0.8243 | 122 | 0.0252 | - | - | - |
| 0.8311 | 123 | 0.0294 | - | - | - |
| 0.8378 | 124 | 0.0205 | - | - | - |
| 0.8446 | 125 | 0.0302 | - | - | - |
| 0.8514 | 126 | 0.0468 | - | - | - |
| 0.8581 | 127 | 0.0311 | - | - | - |
| 0.8649 | 128 | 0.0365 | - | - | - |
| 0.8716 | 129 | 0.0257 | - | - | - |
| 0.8784 | 130 | 0.0339 | - | - | - |
| 0.8851 | 131 | 0.0359 | - | - | - |
| 0.8919 | 132 | 0.0404 | - | - | - |
| 0.8986 | 133 | 0.0223 | - | - | - |
| 0.9054 | 134 | 0.0232 | - | - | - |
| 0.9122 | 135 | 0.0295 | - | - | - |
| 0.9189 | 136 | 0.0244 | - | - | - |
| 0.9257 | 137 | 0.0168 | - | - | - |
| 0.9324 | 138 | 0.0319 | - | - | - |
| 0.9392 | 139 | 0.0328 | - | - | - |
| 0.9459 | 140 | 0.0295 | - | - | - |
| 0.9527 | 141 | 0.0262 | - | - | - |
| 0.9595 | 142 | 0.0238 | - | - | - |
| 0.9662 | 143 | 0.0181 | - | - | - |
| 0.9730 | 144 | 0.017 | - | - | - |
| 0.9797 | 145 | 0.0244 | - | - | - |
| 0.9865 | 146 | 0.0264 | - | - | - |
| 0.9932 | 147 | 0.0194 | - | - | - |
| 1.0 | 148 | 0.0028 | - | - | 0.6394 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
John6666/moefussion-xl-niaslx1-sdxl | John6666 | "2024-12-23T06:54:25Z" | 374 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"art",
"anime",
"stylized",
"cute",
"girls",
"en",
"base_model:JosefJilek/moeFussion",
"base_model:finetune:JosefJilek/moeFussion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-12-05T02:35:16Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- art
- anime
- stylized
- cute
- girls
base_model: JosefJilek/moeFussion
---
Original model is [here](https://huggingface.co/JosefJilek/moeFussion) and on [Civitai](https://civitai.com/models/1007733?modelVersionId=1129653).
> Moe Fussion focuses on providing a wide spectrum of models suitable for different purposes. You are free to use these models however you want within what the SDXL License allows; however, I will be happy if you credit me or at least send a donation: https://www.buymeacoffee.com/jilek772003. Please read the rest to learn how to use these models.
The author is [here](https://huggingface.co/JosefJilek).
This model was created by [jilek77](https://civitai.com/user/jilek77).
mindchain/llama2-qlora-finetunined-french_aktuell | mindchain | "2023-09-10T18:05:02Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-10T18:04:55Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
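A minimal loading sketch that mirrors the quantization config above; the base checkpoint below is an assumption, since the adapter name only indicates a Llama-2 model:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model; the card does not name it
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mindchain/llama2-qlora-finetunined-french_aktuell")
```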
### Framework versions
- PEFT 0.6.0.dev0
|
shibajustfor/a4f49f3e-45d1-4400-870b-b4c5ed62e7b4 | shibajustfor | "2025-01-30T23:34:41Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-30T23:33:35Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4f49f3e-45d1-4400-870b-b4c5ed62e7b4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 713c720a9255088e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/713c720a9255088e_train_data.json
type:
field_input: Patient
field_instruction: Description
field_output: Doctor
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/a4f49f3e-45d1-4400-870b-b4c5ed62e7b4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/713c720a9255088e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 32dab004-a1e1-4f28-ade7-3dcdd4b382d6
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 32dab004-a1e1-4f28-ade7-3dcdd4b382d6
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4f49f3e-45d1-4400-870b-b4c5ed62e7b4
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 3.1150 |
| 3.0663 | 0.0140 | 13 | 2.9760 |
| 2.8298 | 0.0281 | 26 | 2.8916 |
| 2.8062 | 0.0421 | 39 | 2.8634 |
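A minimal, hypothetical inference sketch for this LoRA adapter on top of its base model (the prompt below is invented):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
model = PeftModel.from_pretrained(base, "shibajustfor/a4f49f3e-45d1-4400-870b-b4c5ed62e7b4")

prompt = "Describe the symptoms of a common cold."  # invented example prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```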
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JaehyeokLee/20m_em_checkpoint_epoch_1_step_2200 | JaehyeokLee | "2025-02-24T03:58:17Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"arxiv:2402.03216",
"arxiv:2004.04906",
"arxiv:2106.14807",
"arxiv:2107.05720",
"arxiv:2004.12832",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-24T03:00:55Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details, please refer to our GitHub repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
- Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of an embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- Multi-Linguality: It can support more than 100 working languages.
- Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
**Some suggestions for a retrieval pipeline in RAG:**
We recommend using the following pipeline: hybrid retrieval + re-ranking.
- Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
A classic example: using both embedding retrieval and the BM25 algorithm.
Now, you can try BGE-M3, which supports both embedding and sparse retrieval.
This allows you to obtain token weights (similar to BM25) at no additional cost when generating dense embeddings.
- As a cross-encoder model, a re-ranker demonstrates higher accuracy than a bi-encoder embedding model.
Utilizing a re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
## News:
- 2/6/2024: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR).
- 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb)
## Specs
- Model
| Model Name | Dimension | Sequence Length | Introduction |
|:----:|:---:|:---:|:---:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised|
| [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae |
| [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extend the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and further pretrained via [retromae](https://github.com/staoxiao/RetroMAE)|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model |
- Data
| Dataset | Introduction |
|:----:|:---:|
| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Document Retrieval Dataset, covering 13 languages|
## FAQ
**1. Introduction for different retrieval methods**
- Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
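As a toy illustration, a sparse (lexical) matching score can be computed as a dot product over the token weights that two texts share. A minimal sketch with invented weights:
```python
# Toy lexical matching: dot product over shared tokens (weights are invented)
def lexical_match(w1: dict, w2: dict) -> float:
    return sum(w1[t] * w2[t] for t in w1.keys() & w2.keys())

print(lexical_match({'BM': 0.25, '25': 0.33}, {'BM': 0.20, '25': 0.30, 'rank': 0.10}))
# 0.149
```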
**2. Comparison with BGE-v1.5 and other monolingual models**
BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for single languages.
However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
unlike most existing models that can only perform dense retrieval.
In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc),
and users can choose a model that suits their specific needs based on practical considerations,
such as whether to require multilingual or cross-language support, and whether to process long texts.
**3. How to use BGE-M3 in other projects?**
For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
For sparse retrieval methods, most open-source libraries currently do not support direct utilization of the BGE-M3 model.
Contributions from the community are welcome.
In our experiments, we use [Pyserini](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#hybrid-retrieval-dense--sparse) and Faiss to do hybrid retrieval.
**Now you can try the hybrid mode of BGE-M3 in [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb). Thanks @jobergum.**
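As a minimal sketch of such a hybrid setup using BGE-M3 itself (the 0.7/0.3 weights below are arbitrary and only for illustration):
```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
query = ["What is BGE M3?"]
doc = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction."]

q = model.encode(query, return_dense=True, return_sparse=True)
d = model.encode(doc, return_dense=True, return_sparse=True)

# Weighted sum of the dense score and the sparse lexical score
dense_score = float(q['dense_vecs'][0] @ d['dense_vecs'][0])
sparse_score = model.compute_lexical_matching_score(q['lexical_weights'][0], d['lexical_weights'][0])
hybrid_score = 0.7 * dense_score + 0.3 * sparse_score  # arbitrary weights
print(dense_score, sparse_score, hybrid_score)
```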
**4. How to fine-tune bge-M3 model?**
You can follow the common practice in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
to fine-tune the dense embedding.
Our code and data for unified fine-tuning (dense, sparse, and multi-vector) will be released.
## Usage
Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```
### Generate Embedding for text
- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3',
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1,
batch_size=12,
max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You can also use sentence-transformers and Hugging Face transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
- Sparse Embedding (Lexical Weight)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
# you can see the weight for each token:
print(model.convert_id_to_token(output_1['lexical_weights']))
# [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
# {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
# compute the scores via lexical matching
lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
print(lexical_scores)
# 0.19554901123046875
print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
# 0.0
```
- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# 0.7797
# 0.4620
```
### Compute score for text pairs
Given a list of text pairs, you can get the scores computed by different methods.
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs,
max_passage_length=128, # a smaller max length leads to a lower latency
weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
# {
# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
# 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
# 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
# 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
# }
```
## Evaluation
- Multilingual (Miracl dataset)

- Cross-lingual (MKQA dataset)

- Long Document Retrieval
- MLDR:

Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM,
covering 13 languages and including test, validation, and training sets.
We utilized the training set from MLDR to enhance the model's long-document retrieval capabilities.
Therefore, comparing baselines with `Dense w.o.long` (fine-tuned without the long-document dataset) is more equitable.
Additionally, this long-document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long-text retrieval datasets.
We believe that this data will be helpful for the open-source community in training document retrieval models.
- NarrativeQA:

## Training
- Self-knowledge Distillation: combining multiple outputs from different retrieval modes as a reward signal to enhance the performance of each single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval).
- Efficient Batching: improves efficiency when fine-tuning on long text. The small-batch strategy is simple but effective, and it can also be used to fine-tune large embedding models.
- MCLS: a simple method to improve performance on long text without fine-tuning. If you do not have enough resources to fine-tune the model on long text, this method is useful.
Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details.
**The fine-tuning code and datasets will be open-sourced in the near future.**
## Acknowledgement
Thanks to the authors of open-sourced datasets, including MIRACL, MKQA, NarrativeQA, etc.
Thanks to open-sourced libraries like [Tevatron](https://github.com/texttron/tevatron) and [Pyserini](https://github.com/castorini/pyserini).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{bge-m3,
title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
year={2024},
eprint={2402.03216},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
norsu/taxi-v3 | norsu | "2024-02-26T16:37:04Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-26T16:36:34Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the pickle-based helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="norsu/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
li1999/clip | li1999 | "2023-04-06T03:45:17Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-04-06T03:45:17Z" | ---
license: bigscience-openrail-m
---
|
dbands/Qwen2-7b-Alpacha-merged_4bit | dbands | "2024-06-12T18:45:47Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2-7B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2-7B-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-09T11:43:32Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
base_model: unsloth/Qwen2-7B-bnb-4bit
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-7B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
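A minimal, hypothetical inference sketch for this merged 4-bit checkpoint; the prompt below is invented, and the expected Alpaca-style prompt template is not documented here:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dbands/Qwen2-7b-Alpacha-merged_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```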
|
IvanPerkhun/does_the_patient_have_any_symptoms_bert_Last256 | IvanPerkhun | "2024-09-26T13:06:29Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-09-26T13:04:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
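As a hypothetical starting point inferred only from the repository metadata (a BERT text-classification model; the label set is undocumented):
```python
from transformers import pipeline

# Hypothetical usage; the label names and meanings are not documented in this card
clf = pipeline("text-classification",
               model="IvanPerkhun/does_the_patient_have_any_symptoms_bert_Last256")
print(clf("The patient reports a persistent cough and mild fever."))
```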
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Salmamoori/Bert-finetuned-toxic-comment-classification-v2 | Salmamoori | "2024-06-18T15:26:15Z" | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"Language",
"toxic-comment",
"Bert",
"PyTorch",
"Trainer",
"F1Score",
"HuggingFaceHub",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-18T15:11:06Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- Language
- toxic-comment
- Bert
- PyTorch
- Trainer
- F1Score
- HuggingFaceHub
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: Bert-finetuned-toxic-comment-classification-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-finetuned-toxic-comment-classification-v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the toxic-comment-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1438
- Accuracy: 0.965
- Recall: 0.7143
- Precision: 0.9375
- F1: 0.8108
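A minimal, hypothetical inference sketch (the returned label names depend on the fine-tuning configuration and are not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Salmamoori/Bert-finetuned-toxic-comment-classification-v2",
)
print(classifier("You are a wonderful person!"))  # invented example input
```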
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2792 | 1.0 | 100 | 0.2226 | 0.96 | 0.6190 | 1.0 | 0.7647 |
| 0.154 | 2.0 | 200 | 0.1438 | 0.965 | 0.7143 | 0.9375 | 0.8108 |
| 0.0488 | 3.0 | 300 | 0.2012 | 0.965 | 0.9524 | 0.7692 | 0.8511 |
| 0.015 | 4.0 | 400 | 0.2588 | 0.955 | 0.7143 | 0.8333 | 0.7692 |
| 0.0035 | 5.0 | 500 | 0.2444 | 0.965 | 0.7619 | 0.8889 | 0.8205 |
| 0.0001 | 6.0 | 600 | 0.2524 | 0.965 | 0.7619 | 0.8889 | 0.8205 |
| 0.0001 | 7.0 | 700 | 0.2580 | 0.965 | 0.7619 | 0.8889 | 0.8205 |
| 0.0001 | 8.0 | 800 | 0.2621 | 0.965 | 0.7619 | 0.8889 | 0.8205 |
| 0.0001 | 9.0 | 900 | 0.2646 | 0.965 | 0.7619 | 0.8889 | 0.8205 |
| 0.0001 | 10.0 | 1000 | 0.2654 | 0.965 | 0.7619 | 0.8889 | 0.8205 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
areegtarek/llmcxr_mimic-4bit | areegtarek | "2024-04-18T12:47:59Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-18T12:45:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
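As a hypothetical starting point inferred only from the repository metadata (GPT-NeoX architecture, 4-bit bitsandbytes, text-generation); the prompt below is invented:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "areegtarek/llmcxr_mimic-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Generate a chest X-ray report:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```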
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hweemiin/dqn-SpaceInvadersNoFrameskip-v4 | hweemiin | "2024-04-13T06:42:34Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-13T06:42:04Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 587.50 +/- 120.36
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hweemiin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hweemiin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hweemiin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
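To load the checkpoint directly in Python rather than through the Zoo scripts, a minimal sketch (the checkpoint filename is an assumption following the usual SB3 Hub naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed: SB3 Hub repos conventionally store <algo>-<env>.zip
checkpoint = load_from_hub(
    repo_id="hweemiin/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```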
|
bmistry4/q-Taxi-v3-optimised | bmistry4 | "2023-10-25T14:58:54Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-25T14:47:01Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-optimised
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="bmistry4/q-Taxi-v3-optimised", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
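`load_from_hub` here is a helper from the Hugging Face Deep RL course rather than a published package, and the snippet above additionally assumes `import gymnasium as gym`. A minimal reimplementation, assuming the pickled dict layout used by the course:
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled dict (Q-table, env_id, hyperparameters) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```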
|
11Spades/ppo-LunarLander-v2 | 11Spades | "2023-05-10T01:09:51Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-10T01:09:29Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.83 +/- 33.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption following the usual SB3 Hub naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed: SB3 Hub repos conventionally store <algo>-<env>.zip
checkpoint = load_from_hub(repo_id="11Spades/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jgriffi/pegasus-samsum | jgriffi | "2022-06-23T11:18:59Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-06-23T09:29:19Z" | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
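A minimal usage sketch (SAMSum is dialogue summarization, so the input below is a toy chat transcript):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jgriffi/pegasus-samsum")
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place."
print(summarizer(dialogue)[0]["summary_text"])
```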
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7073 | 0.54 | 500 | 1.4841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
MatrixIA/gemma-2b-FT-text-to-Sql | MatrixIA | "2024-04-02T10:47:23Z" | 11 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-26T12:18:23Z" | ---
{}
---
# Model Card for Model ID
- **Developed by:** TARIK IDRISSI - matrixIA
- **Model type:** LLM
- **Language(s) (NLP):** English (text-to-SQL)
- **Finetuned from model [optional]:** [google/gemma-2b](https://huggingface.co/google/gemma-2b)
### Fine-tuning Data
- BIRD dataset for text-to-SQL, plus NumbersStation/NSText2SQL (only 100k rows)
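No usage example is provided; a minimal inference sketch with 🤗 Transformers follows. The prompt format this fine-tune expects is undocumented, so the schema-plus-question layout below is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MatrixIA/gemma-2b-FT-text-to-Sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed prompt layout: schema context followed by a natural-language question
prompt = "Schema: CREATE TABLE users(id INT, name TEXT);\nQuestion: How many users are there?\nSQL:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```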
|
yewo/yewo | yewo | "2024-05-26T08:58:01Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-05-21T08:54:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
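In the absence of author-provided code, a generic sketch based only on the repo tags (a 🤗 Transformers BART checkpoint used for feature extraction); treat everything here as an assumption:
```python
from transformers import pipeline

# "feature-extraction" matches this repo's pipeline tag; the model itself is undocumented
extractor = pipeline("feature-extraction", model="yewo/yewo")
features = extractor("An example sentence.")  # nested list of per-token hidden states
```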
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Tr13/MobileViT_Food_80epoch | Tr13 | "2024-10-24T19:22:22Z" | 206 | 0 | transformers | [
"transformers",
"safetensors",
"mobilevit",
"image-classification",
"generated_from_trainer",
"base_model:apple/mobilevit-small",
"base_model:finetune:apple/mobilevit-small",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-10-24T10:01:55Z" | ---
library_name: transformers
license: other
base_model: apple/mobilevit-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MobileViT_Food_80epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MobileViT_Food_80epoch
This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7769
- Accuracy: 0.8053
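A minimal inference sketch (the model name suggests a food-classification label set, but the dataset is otherwise undocumented):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Tr13/MobileViT_Food_80epoch")
print(classifier("dish.jpg"))  # top predictions as [{'label': ..., 'score': ...}, ...]
```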
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 4.5902 | 0.9994 | 1183 | 4.5818 | 0.0286 |
| 4.2708 | 1.9996 | 2367 | 4.2247 | 0.1690 |
| 3.7077 | 2.9998 | 3551 | 3.5174 | 0.2602 |
| 3.271 | 4.0 | 4735 | 2.9216 | 0.3432 |
| 2.8193 | 4.9994 | 5918 | 2.4241 | 0.4276 |
| 2.4733 | 5.9996 | 7102 | 2.0284 | 0.5017 |
| 2.1674 | 6.9998 | 8286 | 1.7180 | 0.5674 |
| 1.9884 | 8.0 | 9470 | 1.5144 | 0.6122 |
| 1.7582 | 8.9994 | 10653 | 1.3711 | 0.6450 |
| 1.4781 | 9.9996 | 11837 | 1.2530 | 0.6689 |
| 1.6275 | 10.9998 | 13021 | 1.1598 | 0.6924 |
| 1.5292 | 12.0 | 14205 | 1.1260 | 0.7046 |
| 1.3675 | 12.9994 | 15388 | 1.0912 | 0.7122 |
| 1.3782 | 13.9996 | 16572 | 1.0276 | 0.7255 |
| 1.3084 | 14.9998 | 17756 | 1.0042 | 0.7345 |
| 1.1715 | 16.0 | 18940 | 0.9771 | 0.7427 |
| 1.2386 | 16.9994 | 20123 | 0.9601 | 0.7461 |
| 1.1787 | 17.9996 | 21307 | 0.9489 | 0.7472 |
| 1.1716 | 18.9998 | 22491 | 0.9360 | 0.7516 |
| 1.1363 | 20.0 | 23675 | 0.9129 | 0.7595 |
| 1.2677 | 20.9994 | 24858 | 0.9007 | 0.7633 |
| 1.2019 | 21.9996 | 26042 | 0.8869 | 0.7657 |
| 1.0633 | 22.9998 | 27226 | 0.8835 | 0.7656 |
| 1.0393 | 24.0 | 28410 | 0.8742 | 0.7693 |
| 0.9558 | 24.9994 | 29593 | 0.8704 | 0.7705 |
| 1.0596 | 25.9996 | 30777 | 0.8455 | 0.7764 |
| 1.0749 | 26.9998 | 31961 | 0.8431 | 0.7793 |
| 0.9913 | 28.0 | 33145 | 0.8332 | 0.7795 |
| 0.9477 | 28.9994 | 34328 | 0.8434 | 0.7777 |
| 0.9681 | 29.9996 | 35512 | 0.8215 | 0.7840 |
| 0.9356 | 30.9998 | 36696 | 0.8050 | 0.7888 |
| 0.806 | 32.0 | 37880 | 0.8152 | 0.7870 |
| 1.0011 | 32.9994 | 39063 | 0.8089 | 0.7843 |
| 0.9268 | 33.9996 | 40247 | 0.8018 | 0.7884 |
| 0.8209 | 34.9998 | 41431 | 0.8147 | 0.7876 |
| 0.8193 | 36.0 | 42615 | 0.8043 | 0.7893 |
| 0.8523 | 36.9994 | 43798 | 0.8014 | 0.7893 |
| 0.9134 | 37.9996 | 44982 | 0.7995 | 0.7895 |
| 0.9263 | 38.9998 | 46166 | 0.7928 | 0.7896 |
| 0.9393 | 40.0 | 47350 | 0.7951 | 0.7952 |
| 0.8028 | 40.9994 | 48533 | 0.7840 | 0.7967 |
| 0.8299 | 41.9996 | 49717 | 0.7994 | 0.7929 |
| 0.791 | 42.9998 | 50901 | 0.7873 | 0.7921 |
| 0.8739 | 44.0 | 52085 | 0.7869 | 0.7956 |
| 0.8777 | 44.9994 | 53268 | 0.7835 | 0.7952 |
| 0.8077 | 45.9996 | 54452 | 0.7815 | 0.7957 |
| 0.9119 | 46.9998 | 55636 | 0.7753 | 0.7984 |
| 0.9867 | 48.0 | 56820 | 0.7824 | 0.7969 |
| 0.8115 | 48.9994 | 58003 | 0.7852 | 0.7975 |
| 0.779 | 49.9996 | 59187 | 0.7815 | 0.7992 |
| 0.755 | 50.9998 | 60371 | 0.7796 | 0.8011 |
| 0.7529 | 52.0 | 61555 | 0.7739 | 0.8014 |
| 0.6878 | 52.9994 | 62738 | 0.7914 | 0.7989 |
| 0.744 | 53.9996 | 63922 | 0.7774 | 0.8002 |
| 0.7346 | 54.9998 | 65106 | 0.7679 | 0.8012 |
| 0.7672 | 56.0 | 66290 | 0.7696 | 0.7998 |
| 0.8018 | 56.9994 | 67473 | 0.7877 | 0.7987 |
| 0.7507 | 57.9996 | 68657 | 0.7903 | 0.7979 |
| 0.7632 | 58.9998 | 69841 | 0.7831 | 0.8010 |
| 0.7013 | 60.0 | 71025 | 0.7799 | 0.7985 |
| 0.7364 | 60.9994 | 72208 | 0.7527 | 0.8079 |
| 0.8036 | 61.9996 | 73392 | 0.7664 | 0.8010 |
| 0.74 | 62.9998 | 74576 | 0.7683 | 0.8022 |
| 0.6531 | 64.0 | 75760 | 0.7548 | 0.8021 |
| 0.7375 | 64.9994 | 76943 | 0.7623 | 0.8022 |
| 0.7228 | 65.9996 | 78127 | 0.7820 | 0.8028 |
| 0.7318 | 66.9998 | 79311 | 0.7625 | 0.8008 |
| 0.6529 | 68.0 | 80495 | 0.7693 | 0.8036 |
| 0.68 | 68.9994 | 81678 | 0.7371 | 0.8093 |
| 0.7396 | 69.9996 | 82862 | 0.7699 | 0.8040 |
| 0.7388 | 70.9998 | 84046 | 0.7596 | 0.8038 |
| 0.7135 | 72.0 | 85230 | 0.7607 | 0.8043 |
| 0.6667 | 72.9994 | 86413 | 0.7666 | 0.8034 |
| 0.6866 | 73.9996 | 87597 | 0.7640 | 0.8046 |
| 0.6601 | 74.9998 | 88781 | 0.7573 | 0.8037 |
| 0.7305 | 76.0 | 89965 | 0.7443 | 0.8094 |
| 0.7507 | 76.9994 | 91148 | 0.7636 | 0.8053 |
| 0.7073 | 77.9996 | 92332 | 0.7692 | 0.8033 |
| 0.688 | 78.9998 | 93516 | 0.7609 | 0.8044 |
| 0.6694 | 79.9493 | 94640 | 0.7769 | 0.8053 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/gte-base-i1-GGUF | mradermacher | "2025-03-18T07:55:46Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mteb",
"sentence-similarity",
"sentence-transformers",
"Sentence Transformers",
"en",
"base_model:thenlper/gte-base",
"base_model:quantized:thenlper/gte-base",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"feature-extraction"
] | sentence-similarity | "2025-03-18T07:47:15Z" | ---
base_model: thenlper/gte-base
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mteb
- sentence-similarity
- sentence-transformers
- Sentence Transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/thenlper/gte-base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gte-base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
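Since gte-base is an embedding model, a plausible invocation uses the llama-cpp-python bindings; whether a given runtime build supports this BERT-family architecture should be verified, and the quant choice below is only an example:
```python
from llama_cpp import Llama

# Assumes llama-cpp-python is installed and the quant file has been downloaded
llm = Llama(model_path="gte-base.i1-Q4_K_M.gguf", embedding=True)
result = llm.create_embedding("The quick brown fox")
vector = result["data"][0]["embedding"]
```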
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gte-base-i1-GGUF/resolve/main/gte-base.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
botenius/b592b3f5-6ba7-42e8-a9b6-2214d92f2c45 | botenius | "2025-02-08T21:06:01Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-08T20:05:37Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b592b3f5-6ba7-42e8-a9b6-2214d92f2c45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 45111e02370473e3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/45111e02370473e3_train_data.json
type:
field_instruction: text
field_output: completion_a
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/b592b3f5-6ba7-42e8-a9b6-2214d92f2c45
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/45111e02370473e3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: f8ea3310-4a2d-43f6-8630-c66c0845852e
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: f8ea3310-4a2d-43f6-8630-c66c0845852e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# b592b3f5-6ba7-42e8-a9b6-2214d92f2c45
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7508
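This repo holds a LoRA adapter rather than full weights; a minimal loading sketch with PEFT (the quantization settings from the config above are omitted for brevity):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
model = PeftModel.from_pretrained(base, "botenius/b592b3f5-6ba7-42e8-a9b6-2214d92f2c45")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
```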
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7031 | 0.4103 | 500 | 0.7508 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fzzhang/mistral_gsm8k_s_prod_fullS | fzzhang | "2024-02-29T02:40:11Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-02-28T19:11:32Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral_gsm8k_s_prod_fullS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_gsm8k_s_prod_fullS
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
BrBlitz/Taxi-v3 | BrBlitz | "2023-12-11T09:49:30Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-11T09:49:29Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="BrBlitz/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
art0123/OneTrainer_art | art0123 | "2025-03-03T09:28:13Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2025-03-03T08:50:10Z" | ---
license: unknown
---
🚀 Updated my OneTrainer build. 🚀
Friends, I'm pleased to present a portable version of OneTrainer!
Added training_presets.V2 for Flux from our Spanish friend.
Added update_OneTrainer.bat for easily updating OneTrainer to the current version.
🔧 Installation:
1. Unpack the archive.
2. Run install.bat.
3. To update, run update_OneTrainer.bat.
⚠️ IMPORTANT!!! After installation, do NOT move or rename the OneTrainer folder. If you need to, you will have to delete the venv and run the installation again.
🛠 Requirements: only Git needs to be installed on the system.
💻 Launch: use the start-ui.bat file. |
VK246/IC_ver6F_coco_swin_gpt2_50B_1e | VK246 | "2023-08-18T16:03:56Z" | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"base_model:VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e",
"base_model:finetune:VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2023-08-18T13:43:39Z" | ---
base_model: VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
model-index:
- name: IC_ver6F_coco_swin_gpt2_50B_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver6F_coco_swin_gpt2_50B_1e
This model is a fine-tuned version of [VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e](https://huggingface.co/VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7799
- Cider: 5.8986
- Rouge1: 42.1787
- Rouge2: 16.6289
- Rougel: 38.245
- Rougelsum: 38.236
- Bleu-1: 43.2152
- Bleu-2: 25.0563
- Bleu-3: 15.845
- Bleu-4: 10.5042
- Gen Len: 11.3063
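A minimal captioning sketch (the checkpoint is a Swin + GPT-2 vision-encoder-decoder, so the generic image-to-text pipeline should apply):
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="VK246/IC_ver6F_coco_swin_gpt2_50B_1e")
print(captioner("photo.jpg"))  # e.g. [{'generated_text': '...'}]
```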
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cider | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.6972 | 0.34 | 1000 | 0.8128 | 5.8314 | 41.3992 | 16.1278 | 37.5675 | 37.5537 | 42.6637 | 24.5815 | 15.5018 | 10.2465 | 11.3063 |
| 0.7318 | 0.68 | 2000 | 0.7912 | 6.9716 | 41.8244 | 16.3282 | 37.9594 | 37.9525 | 42.7623 | 24.7305 | 15.6458 | 10.4067 | 11.3063 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
John6666/duchaiten-pony-classic-anime-v10-sdxl | John6666 | "2024-08-24T16:53:35Z" | 6,908 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"cartoon",
"furry",
"classic anime styles",
"80s-90s",
"pony",
"en",
"dataset:DucHaiten/Classic-Anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-24T16:48:45Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
- cartoon
- furry
- classic anime styles
- 80s-90s
- pony
datasets: DucHaiten/Classic-Anime
---
Original model is [here](https://civitai.com/models/655978/duchaiten-ponyclassicanime?modelVersionId=733914). The author is [here](https://huggingface.co/DucHaiten).
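A minimal generation sketch with 🧨 Diffusers (fp16 on CUDA is an assumption, and Pony-derived checkpoints often expect score tags in the prompt, which is also an assumption):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/duchaiten-pony-classic-anime-v10-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("score_9, 1girl, classic 90s anime style").images[0]
image.save("sample.png")
```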
|
nkgwh/mistralai_Mistral-7B-Instruct-v0.2-CITPRED-FULLTXT-TRUNC-3000 | nkgwh | "2024-03-14T18:29:24Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-14T18:26:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
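No code is provided; based only on the repo tags (a Mistral-7B-Instruct-v0.2 fine-tune served for conversational text generation), a minimal sketch — the expected input format is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nkgwh/mistralai_Mistral-7B-Instruct-v0.2-CITPRED-FULLTXT-TRUNC-3000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The task (the name suggests citation prediction from truncated full text) is a guess
messages = [{"role": "user", "content": "Your input text here."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```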
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kariver/detr-resnet-101_adagrad_finetuned_food-roboflow | kariver | "2023-11-13T17:54:03Z" | 40 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/detr-resnet-101",
"base_model:finetune:facebook/detr-resnet-101",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2023-11-11T16:33:52Z" | ---
license: apache-2.0
base_model: facebook/detr-resnet-101
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: detr-resnet-101_adagrad_finetuned_food-roboflow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-101_adagrad_finetuned_food-roboflow
This model is a fine-tuned version of [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9027
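A minimal detection sketch (the class labels come from the fine-tuning dataset, which is not documented beyond "food-roboflow"):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="kariver/detr-resnet-101_adagrad_finetuned_food-roboflow")
print(detector("meal.jpg"))  # [{'label': ..., 'score': ..., 'box': {...}}, ...]
```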
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.708 | 0.77 | 50 | 6.6090 |
| 6.2622 | 1.54 | 100 | 6.3841 |
| 6.1083 | 2.31 | 150 | 6.3279 |
| 6.1302 | 3.08 | 200 | 6.1396 |
| 6.0668 | 3.85 | 250 | 6.1742 |
| 5.9788 | 4.62 | 300 | 6.1002 |
| 5.9065 | 5.38 | 350 | 6.0478 |
| 5.8597 | 6.15 | 400 | 5.9363 |
| 5.8188 | 6.92 | 450 | 5.9914 |
| 5.7599 | 7.69 | 500 | 5.8783 |
| 5.6732 | 8.46 | 550 | 5.9710 |
| 5.743 | 9.23 | 600 | 6.0130 |
| 5.6341 | 10.0 | 650 | 5.8789 |
| 5.6265 | 10.77 | 700 | 5.8644 |
| 5.7164 | 11.54 | 750 | 5.9142 |
| 5.6104 | 12.31 | 800 | 5.9677 |
| 5.6572 | 13.08 | 850 | 5.8372 |
| 5.7094 | 13.85 | 900 | 5.8269 |
| 5.7456 | 14.62 | 950 | 5.9027 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|