Dataset columns (type and observed range):

| Column | Type | Observed range / cardinality |
|:--|:--|:--|
| modelId | string | length 5–138 |
| author | string | length 2–42 |
| last_modified | date | 2020-02-15 11:33:14 – 2025-04-12 12:27:10 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 422 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | date | 2022-03-02 23:29:04 – 2025-04-12 12:23:39 |
| card | string | length 11 – 1.01M |
ramazank2000/turkishReviews-ds-mini1 | ramazank2000 | "2023-02-13T03:25:23Z" | 62 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-02-10T04:01:00Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: ramazank2000/turkishReviews-ds-mini1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ramazank2000/turkishReviews-ds-mini1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.1811
- Validation Loss: 9.2643
- Epoch: 2
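A minimal sketch of loading this TensorFlow checkpoint with the standard `transformers` pipeline API (the Turkish prompt is an arbitrary example, not from the card):

```python
from transformers import pipeline

# framework="tf" selects the TensorFlow weights published in this repo
generator = pipeline(
    "text-generation",
    model="ramazank2000/turkishReviews-ds-mini1",
    framework="tf",
)
print(generator("Bu ürün", max_length=40)[0]["generated_text"])
```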
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
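The serialized optimizer above matches what `transformers.create_optimizer` emits for TensorFlow training. A sketch of rebuilding an equivalent schedule, assuming the step counts implied by the config (`decay_steps = num_train_steps - warmup_steps`, which is how the negative `-896` arises when warmup exceeds the total step count):

```python
from transformers import create_optimizer

# 104 total steps with 1000 warmup steps reproduces decay_steps = 104 - 1000 = -896
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=104,    # inferred from the config above, not stated in the card
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```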
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2935 | 9.9865 | 0 |
| 9.6639 | 9.6408 | 1 |
| 9.1811 | 9.2643 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
lhong4759/59214d39-8fb9-4623-b98f-99c2c62ca310 | lhong4759 | "2025-01-21T06:37:05Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-21T05:29:28Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 59214d39-8fb9-4623-b98f-99c2c62ca310
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e351821c3840236f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e351821c3840236f_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/59214d39-8fb9-4623-b98f-99c2c62ca310
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e351821c3840236f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 41504afd-6c03-4810-99dd-38ee63aa3253
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 41504afd-6c03-4810-99dd-38ee63aa3253
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 59214d39-8fb9-4623-b98f-99c2c62ca310
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1610
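A hedged sketch of loading this LoRA adapter on top of its base model with the `peft` and `transformers` APIs (not a snippet provided by the author):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
model = PeftModel.from_pretrained(base, "lhong4759/59214d39-8fb9-4623-b98f-99c2c62ca310")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")
```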
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.8717 | 0.0188 | 200 | 1.1610 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk/bdfc0909-826e-4def-a93b-44af5162839d | kostiantynk | "2025-01-28T07:13:29Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-28T07:09:38Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bdfc0909-826e-4def-a93b-44af5162839d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 212b5b0a0acfb62a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212b5b0a0acfb62a_train_data.json
type:
field_input: en_speaker
field_instruction: tag
field_output: en_sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/bdfc0909-826e-4def-a93b-44af5162839d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/212b5b0a0acfb62a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9cd78644-9218-48ad-b82e-ca305991d528
wandb_project: Mine-SN56-22-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9cd78644-9218-48ad-b82e-ca305991d528
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bdfc0909-826e-4def-a93b-44af5162839d
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 7.0571 |
| 4.7258 | 0.0023 | 13 | 3.1208 |
| 3.3041 | 0.0045 | 26 | 3.0685 |
| 3.1894 | 0.0068 | 39 | 3.0231 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sailoromkar/q-FrozenLake-v1-4x4-noSlippery | sailoromkar | "2023-07-28T14:29:41Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-28T14:26:31Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="sailoromkar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
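A short sketch of rolling out the greedy policy from the loaded Q-table. This assumes the pickled dict exposes its table under a `qtable` key (the Hugging Face Deep RL course convention) and the classic pre-0.26 `gym` step API:

```python
import numpy as np

state = env.reset()
done = False
while not done:
    # act greedily with respect to the learned Q-values
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
```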
|
maanasharma5/dialect-debiasing-LLama-3.2-3B-translated-pnlogmse-e1-r50_eval-n10.0 | maanasharma5 | "2025-04-07T17:44:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"region:us"
] | null | "2025-04-07T17:44:51Z" | ---
base_model: meta-llama/LLama-3.2-3B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
mradermacher/JaskierMistral-7B-slerp-GGUF | mradermacher | "2024-12-29T01:08:13Z" | 74 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"chihoonlee10/T3Q-DPO-Mistral-7B",
"bardsai/jaskier-7b-dpo-v6.1",
"en",
"base_model:allknowingroger/JaskierMistral-7B-slerp",
"base_model:quantized:allknowingroger/JaskierMistral-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-29T00:23:17Z" | ---
base_model: allknowingroger/JaskierMistral-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- chihoonlee10/T3Q-DPO-Mistral-7B
- bardsai/jaskier-7b-dpo-v6.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allknowingroger/JaskierMistral-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/JaskierMistral-7B-slerp-GGUF/resolve/main/JaskierMistral-7B-slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
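As a concrete example, one of the recommended quants above can be fetched with `huggingface_hub` and then loaded by any GGUF-capable runtime such as llama.cpp (a sketch, not a workflow prescribed by the author):

```python
from huggingface_hub import hf_hub_download

# filename taken from the Q4_K_M row of the table above
path = hf_hub_download(
    repo_id="mradermacher/JaskierMistral-7B-slerp-GGUF",
    filename="JaskierMistral-7B-slerp.Q4_K_M.gguf",
)
print(path)  # local file path to hand to llama.cpp, llama-cpp-python, etc.
```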
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ThisIsATest/160m-random-corruption-0-8 | ThisIsATest | "2025-03-19T03:30:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-19T03:29:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MN-12B-Lyra-v4a1-Old-GGUF | mradermacher | "2025-04-05T06:00:30Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/MN-12B-Lyra-v4a1-Old",
"base_model:quantized:Sao10K/MN-12B-Lyra-v4a1-Old",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-05T04:43:16Z" | ---
base_model: Sao10K/MN-12B-Lyra-v4a1-Old
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/MN-12B-Lyra-v4a1-Old
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v4a1-Old-GGUF/resolve/main/MN-12B-Lyra-v4a1-Old.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Thanabordee/openthaigpt-1.0.0-7b-chat-Q6_K-GGUF | Thanabordee | "2024-04-08T03:43:29Z" | 1 | 0 | transformers | [
"transformers",
"gguf",
"openthaigpt",
"llama",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"th",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-08T03:42:27Z" | ---
language:
- th
- en
license: apache-2.0
library_name: transformers
tags:
- openthaigpt
- llama
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# Thanabordee/openthaigpt-1.0.0-7b-chat-Q6_K-GGUF
This model was converted to GGUF format from [`openthaigpt/openthaigpt-1.0.0-7b-chat`](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-7b-chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-7b-chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Thanabordee/openthaigpt-1.0.0-7b-chat-Q6_K-GGUF --model openthaigpt-1.0.0-7b-chat.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Thanabordee/openthaigpt-1.0.0-7b-chat-Q6_K-GGUF --model openthaigpt-1.0.0-7b-chat.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m openthaigpt-1.0.0-7b-chat.Q6_K.gguf -n 128
```
|
lesso07/048c47fa-88ea-4992-bf66-ae8e50cfad40 | lesso07 | "2025-01-10T19:03:20Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"region:us"
] | null | "2025-01-10T14:59:11Z" | ---
library_name: peft
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 048c47fa-88ea-4992-bf66-ae8e50cfad40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: true
chat_template: llama3
datasets:
- data_files:
- ea4b9b40db84f198_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ea4b9b40db84f198_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso07/048c47fa-88ea-4992-bf66-ae8e50cfad40
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/ea4b9b40db84f198_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 04651042-a779-4343-9025-d1a23af15c30
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 04651042-a779-4343-9025-d1a23af15c30
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# 048c47fa-88ea-4992-bf66-ae8e50cfad40
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.5422 | 0.0001 | 1 | 12.2763 |
| 0.2557 | 0.0007 | 9 | 0.4532 |
| 0.1373 | 0.0014 | 18 | 0.1263 |
| 0.062 | 0.0022 | 27 | 0.0433 |
| 0.0013 | 0.0029 | 36 | 0.0399 |
| 0.0007 | 0.0036 | 45 | 0.0365 |
| 0.0059 | 0.0043 | 54 | 0.0253 |
| 0.0027 | 0.0050 | 63 | 0.0254 |
| 0.0309 | 0.0057 | 72 | 0.0293 |
| 0.0047 | 0.0065 | 81 | 0.0307 |
| 0.0044 | 0.0072 | 90 | 0.0276 |
| 0.0003 | 0.0079 | 99 | 0.0274 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
evgsumina/rubert-finetuned-ner | evgsumina | "2024-11-13T18:01:10Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-11-10T11:14:22Z" | ---
library_name: transformers
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: rubert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-finetuned-ner
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1555
- Precision: 0.8890
- Recall: 0.9087
- F1: 0.8988
- Accuracy: 0.9590
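A minimal sketch of running this checkpoint with the `transformers` token-classification pipeline (the entity label set depends on the unspecified training data):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="evgsumina/rubert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Москва — столица России."))
```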
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0665 | 0.5 | 625 | 0.2322 | 0.8077 | 0.8335 | 0.8204 | 0.9336 |
| 0.1781 | 1.0 | 1250 | 0.1786 | 0.8379 | 0.8815 | 0.8592 | 0.9483 |
| 0.1083 | 1.5 | 1875 | 0.1828 | 0.8845 | 0.9043 | 0.8943 | 0.9568 |
| 0.0609 | 2.0 | 2500 | 0.1555 | 0.8890 | 0.9087 | 0.8988 | 0.9590 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
fedovtt/df4b6aad-36b4-47bf-9869-cc9755ced4f6 | fedovtt | "2025-01-24T05:34:05Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-24T04:32:53Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: df4b6aad-36b4-47bf-9869-cc9755ced4f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- be25ce38282aeb5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be25ce38282aeb5a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/df4b6aad-36b4-47bf-9869-cc9755ced4f6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/be25ce38282aeb5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 14fba03c-c528-4737-ac1e-1f62f6edce20
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 14fba03c-c528-4737-ac1e-1f62f6edce20
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# df4b6aad-36b4-47bf-9869-cc9755ced4f6
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0003 | 10 | nan |
| 0.0 | 0.0005 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
twieland/LN_ja-en_helsinki | twieland | "2022-06-07T22:34:00Z" | 20 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-06-07T09:12:27Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LN_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LN_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5382
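A minimal sketch of using this Marian checkpoint through the `transformers` translation pipeline (the input sentence is an arbitrary example):

```python
from transformers import pipeline

translator = pipeline("translation", model="twieland/LN_ja-en_helsinki")
print(translator("彼女はゆっくりと振り返った。")[0]["translation_text"])
```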
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.5108 | 0.02 | 2000 | 2.8405 |
| 2.2729 | 0.04 | 4000 | 2.7758 |
| 2.1673 | 0.06 | 6000 | 2.7098 |
| 2.0981 | 0.08 | 8000 | 2.6487 |
| 2.048 | 0.1 | 10000 | 2.7008 |
| 2.0077 | 0.12 | 12000 | 2.6614 |
| 1.9677 | 0.13 | 14000 | 2.6227 |
| 1.9445 | 0.15 | 16000 | 2.5895 |
| 1.9137 | 0.17 | 18000 | 2.5897 |
| 1.8911 | 0.19 | 20000 | 2.6771 |
| 1.8695 | 0.21 | 22000 | 2.6332 |
| 1.8479 | 0.23 | 24000 | 2.6130 |
| 1.8378 | 0.25 | 26000 | 2.6518 |
| 1.8191 | 0.27 | 28000 | 2.6401 |
| 1.8024 | 0.29 | 30000 | 2.6617 |
| 1.7933 | 0.31 | 32000 | 2.6705 |
| 1.7787 | 0.33 | 34000 | 2.6280 |
| 1.7661 | 0.35 | 36000 | 2.6911 |
| 1.7514 | 0.36 | 38000 | 2.6766 |
| 1.7444 | 0.38 | 40000 | 2.6996 |
| 1.7363 | 0.4 | 42000 | 2.6276 |
| 1.722 | 0.42 | 44000 | 2.6466 |
| 1.7177 | 0.44 | 46000 | 2.5937 |
| 1.7055 | 0.46 | 48000 | 2.6386 |
| 1.6956 | 0.48 | 50000 | 2.6794 |
| 1.6885 | 0.5 | 52000 | 2.7336 |
| 1.679 | 0.52 | 54000 | 2.7266 |
| 1.6715 | 0.54 | 56000 | 2.6945 |
| 1.6666 | 0.56 | 58000 | 2.7111 |
| 1.6599 | 0.58 | 60000 | 2.7205 |
| 1.6566 | 0.59 | 62000 | 2.7194 |
| 1.6481 | 0.61 | 64000 | 2.6582 |
| 1.6434 | 0.63 | 66000 | 2.6859 |
| 1.6315 | 0.65 | 68000 | 2.7058 |
| 1.6258 | 0.67 | 70000 | 2.7428 |
| 1.6189 | 0.69 | 72000 | 2.7411 |
| 1.6169 | 0.71 | 74000 | 2.7039 |
| 1.6087 | 0.73 | 76000 | 2.6844 |
| 1.6021 | 0.75 | 78000 | 2.6454 |
| 1.6034 | 0.77 | 80000 | 2.6596 |
| 1.5941 | 0.79 | 82000 | 2.6903 |
| 1.5862 | 0.81 | 84000 | 2.7099 |
| 1.5836 | 0.83 | 86000 | 2.6929 |
| 1.5827 | 0.84 | 88000 | 2.7181 |
| 1.5747 | 0.86 | 90000 | 2.6888 |
| 1.5678 | 0.88 | 92000 | 2.6662 |
| 1.5643 | 0.9 | 94000 | 2.6663 |
| 1.561 | 0.92 | 96000 | 2.6699 |
| 1.5565 | 0.94 | 98000 | 2.6667 |
| 1.5501 | 0.96 | 100000 | 2.6828 |
| 1.5476 | 0.98 | 102000 | 2.6531 |
| 1.5444 | 1.0 | 104000 | 2.6799 |
| 1.5057 | 1.02 | 106000 | 2.6525 |
| 1.5003 | 1.04 | 108000 | 2.6996 |
| 1.4996 | 1.06 | 110000 | 2.6649 |
| 1.4996 | 1.07 | 112000 | 2.6974 |
| 1.4966 | 1.09 | 114000 | 2.7594 |
| 1.4967 | 1.11 | 116000 | 2.6966 |
| 1.492 | 1.13 | 118000 | 2.6929 |
| 1.4923 | 1.15 | 120000 | 2.6522 |
| 1.4838 | 1.17 | 122000 | 2.6363 |
| 1.4839 | 1.19 | 124000 | 2.6849 |
| 1.4807 | 1.21 | 126000 | 2.6667 |
| 1.4778 | 1.23 | 128000 | 2.6684 |
| 1.4731 | 1.25 | 130000 | 2.6338 |
| 1.4727 | 1.27 | 132000 | 2.6093 |
| 1.4695 | 1.29 | 134000 | 2.6020 |
| 1.4656 | 1.3 | 136000 | 2.6341 |
| 1.4648 | 1.32 | 138000 | 2.6509 |
| 1.4578 | 1.34 | 140000 | 2.6807 |
| 1.4606 | 1.36 | 142000 | 2.6357 |
| 1.4529 | 1.38 | 144000 | 2.6404 |
| 1.4488 | 1.4 | 146000 | 2.6347 |
| 1.4442 | 1.42 | 148000 | 2.6058 |
| 1.4447 | 1.44 | 150000 | 2.6645 |
| 1.4432 | 1.46 | 152000 | 2.6070 |
| 1.437 | 1.48 | 154000 | 2.5987 |
| 1.4345 | 1.5 | 156000 | 2.6309 |
| 1.43 | 1.52 | 158000 | 2.5947 |
| 1.4301 | 1.54 | 160000 | 2.5938 |
| 1.4267 | 1.55 | 162000 | 2.6146 |
| 1.426 | 1.57 | 164000 | 2.6519 |
| 1.4193 | 1.59 | 166000 | 2.6163 |
| 1.416 | 1.61 | 168000 | 2.5793 |
| 1.4146 | 1.63 | 170000 | 2.6031 |
| 1.4091 | 1.65 | 172000 | 2.5826 |
| 1.4067 | 1.67 | 174000 | 2.5891 |
| 1.4081 | 1.69 | 176000 | 2.6006 |
| 1.4023 | 1.71 | 178000 | 2.5697 |
| 1.4003 | 1.73 | 180000 | 2.5633 |
| 1.3986 | 1.75 | 182000 | 2.5494 |
| 1.3924 | 1.77 | 184000 | 2.5577 |
| 1.3931 | 1.78 | 186000 | 2.5888 |
| 1.3851 | 1.8 | 188000 | 2.5716 |
| 1.3869 | 1.82 | 190000 | 2.5570 |
| 1.3825 | 1.84 | 192000 | 2.5702 |
| 1.3787 | 1.86 | 194000 | 2.5754 |
| 1.3738 | 1.88 | 196000 | 2.5901 |
| 1.3734 | 1.9 | 198000 | 2.5374 |
| 1.3693 | 1.92 | 200000 | 2.5897 |
| 1.3703 | 1.94 | 202000 | 2.5422 |
| 1.3685 | 1.96 | 204000 | 2.5825 |
| 1.3664 | 1.98 | 206000 | 2.5201 |
| 1.3607 | 2.0 | 208000 | 2.5733 |
| 1.3217 | 2.02 | 210000 | 2.5879 |
| 1.31 | 2.03 | 212000 | 2.5777 |
| 1.3125 | 2.05 | 214000 | 2.5724 |
| 1.3084 | 2.07 | 216000 | 2.5968 |
| 1.3087 | 2.09 | 218000 | 2.5976 |
| 1.3063 | 2.11 | 220000 | 2.5969 |
| 1.3057 | 2.13 | 222000 | 2.6353 |
| 1.3067 | 2.15 | 224000 | 2.6147 |
| 1.3013 | 2.17 | 226000 | 2.5897 |
| 1.3018 | 2.19 | 228000 | 2.5783 |
| 1.2968 | 2.21 | 230000 | 2.6172 |
| 1.2975 | 2.23 | 232000 | 2.6180 |
| 1.2946 | 2.25 | 234000 | 2.6192 |
| 1.299 | 2.26 | 236000 | 2.5895 |
| 1.2896 | 2.28 | 238000 | 2.5682 |
| 1.287 | 2.3 | 240000 | 2.5653 |
| 1.2902 | 2.32 | 242000 | 2.5501 |
| 1.2862 | 2.34 | 244000 | 2.5747 |
| 1.2841 | 2.36 | 246000 | 2.5654 |
| 1.2838 | 2.38 | 248000 | 2.5703 |
| 1.2813 | 2.4 | 250000 | 2.5919 |
| 1.2778 | 2.42 | 252000 | 2.5552 |
| 1.2821 | 2.44 | 254000 | 2.5603 |
| 1.2729 | 2.46 | 256000 | 2.5455 |
| 1.2718 | 2.48 | 258000 | 2.5688 |
| 1.2729 | 2.49 | 260000 | 2.5574 |
| 1.2699 | 2.51 | 262000 | 2.5468 |
| 1.2677 | 2.53 | 264000 | 2.5704 |
| 1.2647 | 2.55 | 266000 | 2.5665 |
| 1.2628 | 2.57 | 268000 | 2.5594 |
| 1.2636 | 2.59 | 270000 | 2.5426 |
| 1.2573 | 2.61 | 272000 | 2.5666 |
| 1.2576 | 2.63 | 274000 | 2.5580 |
| 1.2511 | 2.65 | 276000 | 2.5742 |
| 1.2513 | 2.67 | 278000 | 2.5646 |
| 1.2495 | 2.69 | 280000 | 2.5669 |
| 1.2472 | 2.71 | 282000 | 2.5700 |
| 1.2478 | 2.73 | 284000 | 2.5496 |
| 1.2471 | 2.74 | 286000 | 2.5335 |
| 1.2436 | 2.76 | 288000 | 2.5315 |
| 1.2411 | 2.78 | 290000 | 2.5302 |
| 1.2391 | 2.8 | 292000 | 2.5290 |
| 1.2352 | 2.82 | 294000 | 2.5303 |
| 1.2332 | 2.84 | 296000 | 2.5412 |
| 1.233 | 2.86 | 298000 | 2.5523 |
| 1.2298 | 2.88 | 300000 | 2.5524 |
| 1.2285 | 2.9 | 302000 | 2.5517 |
| 1.2297 | 2.92 | 304000 | 2.5419 |
| 1.2256 | 2.94 | 306000 | 2.5404 |
| 1.2239 | 2.96 | 308000 | 2.5390 |
| 1.2264 | 2.97 | 310000 | 2.5364 |
| 1.2259 | 2.99 | 312000 | 2.5382 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ThuyNT03/CS505-L2T_AP2_filter2_CSI-PhoBERT | ThuyNT03 | "2024-03-23T15:29:30Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-23T14:57:45Z" | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
model-index:
- name: CS505-L2T_AP2_filter2_CSI-PhoBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-L2T_AP2_filter2_CSI-PhoBERT
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
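A minimal sketch of running this classifier with the `transformers` text-classification pipeline (the label set comes from the unspecified training data, so it is not documented here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ThuyNT03/CS505-L2T_AP2_filter2_CSI-PhoBERT",
)
print(classifier("Sản phẩm này rất tốt."))
```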
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 330 | 0.1753 |
| 0.2791 | 1.99 | 660 | 0.1593 |
| 0.2791 | 2.99 | 990 | 0.0991 |
| 0.1397 | 3.99 | 1320 | 0.0632 |
| 0.0685 | 4.98 | 1650 | 0.0173 |
| 0.0685 | 5.98 | 1980 | 0.0309 |
| 0.0394 | 6.98 | 2310 | 0.0055 |
| 0.0202 | 7.98 | 2640 | 0.0055 |
| 0.0202 | 8.97 | 2970 | 0.0057 |
| 0.0108 | 9.97 | 3300 | 0.0047 |
| 0.0098 | 10.97 | 3630 | 0.0042 |
| 0.0098 | 11.96 | 3960 | 0.0016 |
| 0.0039 | 12.96 | 4290 | 0.0025 |
| 0.0039 | 13.96 | 4620 | 0.0023 |
| 0.0039 | 14.95 | 4950 | 0.0010 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
seonggyun/noadapter_mask_screw | seonggyun | "2024-12-12T04:28:00Z" | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-12-12T04:25:09Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wiweka24/gemma-psychiatrist-v0-fp16 | wiweka24 | "2024-07-01T17:45:46Z" | 12 | 0 | transformers | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-01T17:37:05Z" | ---
base_model: unsloth/gemma-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
---
# Uploaded model
- **Developed by:** wiweka24
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
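As a minimal sketch (not from the authors), the fine-tuned weights can be loaded with plain transformers; the prompt and generation settings below are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wiweka24/gemma-psychiatrist-v0-fp16"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Plain-text prompting; the exact instruction format used during SFT is not documented here.
prompt = "Patient: I have trouble sleeping and feel anxious.\nPsychiatrist:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```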
|
timm/mambaout_base_tall_rw.sw_e500_in1k | timm | "2025-01-21T21:58:53Z" | 230 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"dataset:imagenet-1k",
"arxiv:2405.07992",
"license:apache-2.0",
"region:us"
] | image-classification | "2024-10-11T17:49:06Z" | ---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mambaout_base_tall_rw.sw_e500_in1k
A MambaOut image classification model with `timm` specific architecture customizations. Trained on ImageNet-1k by Ross Wightman using Swin / ConvNeXt based recipe.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.5
- GMACs: 16.1
- Activations (M): 38.7
- Image size: train = 224 x 224, test = 288 x 288
- **Dataset:** ImageNet-1k
- **Papers:**
- PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
- MambaOut: Do We Really Need Mamba for Vision?: https://arxiv.org/abs/2405.07992
- **Original:** https://github.com/yuweihao/MambaOut
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mambaout_base_tall_rw.sw_e500_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mambaout_base_tall_rw.sw_e500_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 768])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mambaout_base_tall_rw.sw_e500_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 7, 7, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |img_size|top1 |top5 |param_count|
|---------------------------------------------------------------------------------------------------------------------|--------|------|------|-----------|
|[mambaout_base_plus_rw.sw_e150_r384_in12k_ft_in1k](http://huggingface.co/timm/mambaout_base_plus_rw.sw_e150_r384_in12k_ft_in1k)|384 |87.506|98.428|101.66 |
|[mambaout_base_plus_rw.sw_e150_in12k_ft_in1k](http://huggingface.co/timm/mambaout_base_plus_rw.sw_e150_in12k_ft_in1k)|288 |86.912|98.236|101.66 |
|[mambaout_base_plus_rw.sw_e150_in12k_ft_in1k](http://huggingface.co/timm/mambaout_base_plus_rw.sw_e150_in12k_ft_in1k)|224 |86.632|98.156|101.66 |
|[mambaout_base_tall_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_tall_rw.sw_e500_in1k) |288 |84.974|97.332|86.48 |
|[mambaout_base_wide_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_wide_rw.sw_e500_in1k) |288 |84.962|97.208|94.45 |
|[mambaout_base_short_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_short_rw.sw_e500_in1k) |288 |84.832|97.27 |88.83 |
|[mambaout_base.in1k](http://huggingface.co/timm/mambaout_base.in1k) |288 |84.72 |96.93 |84.81 |
|[mambaout_small_rw.sw_e450_in1k](http://huggingface.co/timm/mambaout_small_rw.sw_e450_in1k) |288 |84.598|97.098|48.5 |
|[mambaout_small.in1k](http://huggingface.co/timm/mambaout_small.in1k) |288 |84.5 |96.974|48.49 |
|[mambaout_base_wide_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_wide_rw.sw_e500_in1k) |224 |84.454|96.864|94.45 |
|[mambaout_base_tall_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_tall_rw.sw_e500_in1k) |224 |84.434|96.958|86.48 |
|[mambaout_base_short_rw.sw_e500_in1k](http://huggingface.co/timm/mambaout_base_short_rw.sw_e500_in1k) |224 |84.362|96.952|88.83 |
|[mambaout_base.in1k](http://huggingface.co/timm/mambaout_base.in1k) |224 |84.168|96.68 |84.81 |
|[mambaout_small.in1k](http://huggingface.co/timm/mambaout_small.in1k) |224 |84.086|96.63 |48.49 |
|[mambaout_small_rw.sw_e450_in1k](http://huggingface.co/timm/mambaout_small_rw.sw_e450_in1k) |224 |84.024|96.752|48.5 |
|[mambaout_tiny.in1k](http://huggingface.co/timm/mambaout_tiny.in1k) |288 |83.448|96.538|26.55 |
|[mambaout_tiny.in1k](http://huggingface.co/timm/mambaout_tiny.in1k) |224 |82.736|96.1 |26.55 |
|[mambaout_kobe.in1k](http://huggingface.co/timm/mambaout_kobe.in1k) |288 |81.054|95.718|9.14 |
|[mambaout_kobe.in1k](http://huggingface.co/timm/mambaout_kobe.in1k) |224 |79.986|94.986|9.14 |
|[mambaout_femto.in1k](http://huggingface.co/timm/mambaout_femto.in1k) |288 |79.848|95.14 |7.3 |
|[mambaout_femto.in1k](http://huggingface.co/timm/mambaout_femto.in1k) |224 |78.87 |94.408|7.3 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{yu2024mambaout,
title={MambaOut: Do We Really Need Mamba for Vision?},
author={Yu, Weihao and Wang, Xinchao},
journal={arXiv preprint arXiv:2405.07992},
year={2024}
}
```
|
tartspuppy/gpt2-wikitext2-full | tartspuppy | "2025-03-07T02:43:22Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | "2025-03-07T01:33:07Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2-full
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0418 | 1.0 | 549 | 3.9434 |
| 3.6533 | 2.0 | 1098 | 3.9445 |
| 3.5037 | 3.0 | 1647 | 3.9528 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
ClaudioItaly/FourFictionGemma-9 | ClaudioItaly | "2024-09-17T12:18:43Z" | 7 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"base_model:merge:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"base_model:ifable/gemma-2-Ifable-9B",
"base_model:merge:ifable/gemma-2-Ifable-9B",
"base_model:lemon07r/Gemma-2-Ataraxy-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-9B",
"base_model:nbeerbower/gemma2-gutenberg-9B",
"base_model:merge:nbeerbower/gemma2-gutenberg-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-17T11:53:08Z" | ---
base_model:
- UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- lemon07r/Gemma-2-Ataraxy-9B
- ifable/gemma-2-Ifable-9B
- nbeerbower/gemma2-gutenberg-9B
library_name: transformers
tags:
- mergekit
- merge
---
GGUF model: https://huggingface.co/ClaudioItaly/FourFictionGemma-9-Q5_K_M-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I created a fusion of four Gemma2 models specializing in storytelling and fictional writing.
This model requires the Gemma Instruct prompt template.
It responds very well to prompts; a short usage sketch follows below.
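A minimal usage sketch with transformers, applying the Gemma Instruct format via the tokenizer's chat template (the prompt and sampling settings are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioItaly/FourFictionGemma-9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Gemma Instruct formatting (<start_of_turn> ... <end_of_turn>) is handled by the chat template.
messages = [{"role": "user", "content": "Write the opening scene of a mystery novel set in Venice."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```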
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [ifable/gemma-2-Ifable-9B](https://huggingface.co/ifable/gemma-2-Ifable-9B) as a base.
### Models Merged
The following models were included in the merge:
* [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3)
* [lemon07r/Gemma-2-Ataraxy-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B)
* [nbeerbower/gemma2-gutenberg-9B](https://huggingface.co/nbeerbower/gemma2-gutenberg-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/gemma2-gutenberg-9B
parameters:
density: 0.5
weight: 0.5
- model: lemon07r/Gemma-2-Ataraxy-9B
parameters:
density: 0.5
weight: 0.5
- model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: ifable/gemma-2-Ifable-9B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf | RichardErkhov | "2025-02-18T09:03:03Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-18T08:37:08Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3.2_1B_korean_v0.2_sft_by_aidx - GGUF
- Model creator: https://huggingface.co/SEOKDONG/
- Original model: https://huggingface.co/SEOKDONG/llama3.2_1B_korean_v0.2_sft_by_aidx/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q2_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q2_K.gguf) | Q2_K | 0.54GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K.gguf) | Q3_K | 0.64GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_0.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_0.gguf) | Q4_0 | 0.72GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_K.gguf) | Q4_K | 0.75GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_1.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q4_1.gguf) | Q4_1 | 0.77GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_0.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_0.gguf) | Q5_0 | 0.83GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_K.gguf) | Q5_K | 0.85GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_1.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q5_1.gguf) | Q5_1 | 0.89GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q6_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q6_K.gguf) | Q6_K | 0.95GB |
| [llama3.2_1B_korean_v0.2_sft_by_aidx.Q8_0.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_llama3.2_1B_korean_v0.2_sft_by_aidx-gguf/blob/main/llama3.2_1B_korean_v0.2_sft_by_aidx.Q8_0.gguf) | Q8_0 | 1.23GB |
Original model description:
---
library_name: transformers
tags:
- korean
- 1B
- ko
license: apache-2.0
language:
- ko
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-generation
---
### ⛱ This model was developed to apply LlaMA3.2 1B, as the foundation model, to Korean
### and to Korea's diverse cultural contexts.
### Built with self-produced Korean data covering 53 domains, it is a model that understands
### Korean social values and culture. Thanks for ktds✌
### V0.2 Epoch=2
# ❶ Model Description
- Model name and key features:
This model is fine-tuned from the LlaMA3.2 1B model using the SFT method.
It is designed to understand Korean and Korea's diverse cultural contexts ✨✨, and it reflects Korean social
values and culture through self-produced Korean data covering 53 domains.
Key features include text generation, conversational inference, document summarization, question answering, sentiment analysis, and various other natural language processing tasks,
and it can be applied across fields such as law, finance, science, education, business, and cultural research.
- Model architecture:
This model is a high-performance language model based on LlaMA3.2 1B, with one billion (1B) parameters.
Using LlaMA3.2 1B as the foundation model, it was trained via SFT (supervised fine-tuning) to perform well on Korean and Korean culture.
The lightweight structure of LlaMA3.2 1B guarantees fast inference speed and memory efficiency, and it is optimized for a wide range of natural language processing tasks.
This architecture shows excellent performance on tasks such as text generation, question answering, document summarization, and sentiment analysis.
# ❷ Training Data
- This model was trained on self-developed data totaling 3.6GB, comprising 2.33 million examples of QnA, summarization, classification, and more.
Of these, 1.33 million are multiple-choice questions spanning 53 domains, including Korean history, society, finance, law, taxation, mathematics, biology, physics, and chemistry,
trained with a Chain of Thought approach. A further 1.3 million open-ended questions cover 38 domains, including Korean history, finance, law, taxation, and mathematics.
The training data also includes examples for understanding Korean social values and human emotions and for producing outputs that follow the given instructions.
- Training Instruction Datasets Format:
<pre><code>{"prompt": "prompt text", "completion": "ideal generated text"}</code></pre>
# ❸ Use Cases
This model can be used in a wide range of applications. For example:
- Education: question answering and explanation generation for learning materials in history, mathematics, science, and more.
- Business: answering questions about law, finance, and taxation, and summarizing documents.
- Research and culture: natural language processing tasks tailored to Korean society and culture, sentiment analysis, document generation, and translation.
- Customer service: generating conversations with users and providing personalized responses.
- The model is highly applicable across diverse natural language processing tasks.
# ❹ Limitations ⛈⛈
- Although this model is specialized for Korean and Korean culture,
a lack of data in certain areas (e.g., the latest international material, specialized fields) may reduce the accuracy of its
responses for other languages or cultures.
It may also show limited reasoning ability on problems that require complex logical thinking,
and if biased data is included, there is a possibility that biased responses will be generated.
# ❺ How to Use
<pre><code>
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from tqdm import tqdm

base_model = "SEOKDONG/llama3.2_1B_korean_v0.2_sft_by_aidx"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = model.to("cuda:0")

for i in tqdm(range(0, 1)):
    input_text = """
    당신은 AI 비서입니다. 다음 질문에 맞는 답변을 고르세요. 답변은 1,2,3,4 중에 하나만 선택하세요.
    다음 중 탄수화물의 일종으로, 포도당의 이중 결합으로 구성되어 있으며 자주 사용되는 설탕의 성분은 무엇인가?
    선택지: 1. 셀룰로오스 2. 자당 3. 녹말 4. 글리코겐 답변:"""
    inputs = tokenizer(input_text, return_tensors="pt")
    inputs = inputs.to("cuda:0")
    # 3. Run inference with the model
    with torch.no_grad():
        outputs = model.generate(**inputs, max_length=1024, temperature=0.1, do_sample=True, repetition_penalty=1.10)
    # 4. Decode the result
    result = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(result.split("답변:")[1].strip())

Result: '2. 자당입니다. 자당은 포도당이 두 개의 이중 결합으로 이루어진 형태로 존재하며, 주로 설탕의 주요 성분으로 사용됩니다.
셀룰로오스는 단백질, 녹말은 식물성 단백질, 글리코겐은 지방과 관련된 단백질로, 모두 설탕의 성분이 아닙니다. 따라서 정답은 2입니다.'
</code></pre>
|
AyeshaNaeem123/model | AyeshaNaeem123 | "2024-11-30T11:44:19Z" | 75 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-30T11:41:08Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AyeshaNaeem123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sail-rvc/Comentarista_HeroTV_Live__Tiger___Bunny___RVC_V2__Harvest__-_100_Epochs_ | sail-rvc | "2023-07-14T07:20:48Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:20:32Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Comentarista_HeroTV_Live__Tiger___Bunny___RVC_V2__Harvest__-_100_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:20:48
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
tomekkorbak/angry_kilby | tomekkorbak | "2023-01-23T17:57:34Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2023-01-23T05:34:58Z" | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: angry_kilby
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# angry_kilby
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.00056},
'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1661599744},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 4096,
'prefix': '<|aligned|>'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '81a1701e025d2c65ae6e8c2103df559071523ee0'},
'num_additional_tokens': 2,
'path_or_name': 'tomekkorbak/goofy_pasteur'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'angry_kilby',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 2535,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1661599744,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
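Given the conditional-training configuration above, sampling is meant to be conditioned on the `<|aligned|>` control token. A minimal sketch that mirrors the `generate_kwargs` from the generation scenario above (everything else is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomekkorbak/angry_kilby"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Condition generation on the aligned prefix, as in the 'unconditional' scenario above.
inputs = tokenizer("<|aligned|>", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,
        min_length=10,
        max_length=128,
        temperature=0.7,
        top_k=0,
        top_p=0.9,
        bad_words_ids=[[50257], [50258]],  # keep the two control tokens out of samples
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```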
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2wxpf9gn |
anupamtripathi/new_models | anupamtripathi | "2023-09-22T01:49:02Z" | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-08-31T09:11:57Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of BeaverTail Pastry bread
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
hrshtsharma2012/NL2SQL-Picard-final | hrshtsharma2012 | "2023-12-14T19:34:01Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-14T19:18:54Z" | # NL2SQL Picard Final
CS769: Advanced Natural Language Processing
Inspired by the work done by [Picard](https://github.com/ElementAI/picard/)
This is our T5 model pre-trained for the NL2SQL task and fine-tuned on Spider using schema serialization.
## Running the model
The model expects the natural-language question serialized together with the database schema, in the following format:
```text
[question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ...
```
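A minimal inference sketch with transformers, assuming a T5-style seq2seq checkpoint; the question and schema below are made-up illustrations:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "hrshtsharma2012/NL2SQL-Picard-final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Serialize the question together with the schema, following the format above.
serialized = (
    "How many singers do we have? | concert_singer | "
    "singer : singer_id , name , country , age | "
    "concert : concert_id , concert_name , year"
)
inputs = tokenizer(serialized, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```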
|
Kokoutou/Miyagi_5 | Kokoutou | "2025-02-22T03:06:56Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-22T02:57:53Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
robiual-awal/6713ca35-6460-4d96-ad31-f4c3f8a988ff | robiual-awal | "2025-01-24T07:13:58Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-32k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-32k",
"license:apache-2.0",
"region:us"
] | null | "2025-01-24T05:49:07Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-32k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6713ca35-6460-4d96-ad31-f4c3f8a988ff
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-32k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 131ce7396bb7c8cb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/131ce7396bb7c8cb_train_data.json
type:
field_instruction: path
field_output: code
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/6713ca35-6460-4d96-ad31-f4c3f8a988ff
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/131ce7396bb7c8cb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 57265dd5-f06f-4e8a-a3a7-a6dd172a3b46
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 57265dd5-f06f-4e8a-a3a7-a6dd172a3b46
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6713ca35-6460-4d96-ad31-f4c3f8a988ff
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0001 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phospho-app/so100_test7-27uf0d4pij | phospho-app | "2025-04-11T18:17:31Z" | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"replicate",
"region:us"
] | null | "2025-04-11T17:54:59Z" |
---
tags:
- phosphobot
- gr00t
- replicate
task_categories:
- robotics
---
# Gr00t Model - phospho Replication Pipeline
This model was trained using **phospho's Replicate pipeline** for **gr00t models**.
Training parameters:
- **Dataset**: [RASPIAUDIO/so100_test7](https://huggingface.co/datasets/RASPIAUDIO/so100_test7)
- **Wandb run URL**: https://wandb.ai/olivier-ros-ros-co/gr00t-replicate/runs/kriofrgd
- **Epochs**: 20
- **Batch size**: 64
- **Training steps**: 2735
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
🔗 **Explore on Replicate**: [Replicate](https://replicate.com/phospho-app/gr00t-policy)
|
filipealmeida/Mistral-7B-v0.1-sharded | filipealmeida | "2023-09-28T01:15:18Z" | 38 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"pretrained",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-28T01:07:58Z" | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
---
# Sharded version of Mistral-7B-v0.1
This is a sharded version of Mistral-7B-v0.1, so you can load it when CPU memory is limited; a loading sketch follows below.
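A minimal low-memory loading sketch (the `low_cpu_mem_usage` and `device_map` settings are suggestions, not part of the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "filipealmeida/Mistral-7B-v0.1-sharded"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Sharded weights let transformers load the checkpoint shard by shard instead of
# materializing all 7B parameters in CPU RAM at once.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    low_cpu_mem_usage=True,
    device_map="auto",
    torch_dtype="auto",
)

inputs = tokenizer("My favourite condiment is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```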
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [Release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
Fitex77/PrimeBiome7 | Fitex77 | "2025-03-01T07:10:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-01T07:10:12Z" | [Prime Biome](https://primebiome.store/product/primebiome-gummies/) Reviews - Prime Biome is a probiotic and prebiotic supplement available in the form of gummies. Unlike traditional capsules or powders, the gummy format has been noted for its convenience and ease of use. According to the product’s official sources, Prime Biome gummies are formulated using natural ingredients, including a mix of probiotics, herbal extracts, and fibers. The supplement has been marketed as a potential aid for those looking to support their digestive system and maintain overall wellness.
## **[Click here to order from official website of Prime Biome](https://primebiome.store/product/prime-biome-new/)**
## What is a Prime Biome?
A biome is a large-scale ecological community characterized by its climate, flora, and fauna. Some Biome are considered "prime" because of their extraordinary contributions to the Earth's natural systems. These Biome support rich biodiversity, regulate atmospheric conditions, and contribute to processes like carbon sequestration, which helps mitigate climate change. Prime Biome include tropical rainforests, coral reefs, temperate forests, wetlands, and savannas, all of which provide essential services to the planet’s ecosystems and human populations.
## Key Characteristics of [Prime Biome](https://filmfreeway.com/primebiomereviews)
Biodiversity Hotspots: [Prime Biome](https://www.facebook.com/groups/primebiomereviewsexperiences) are often home to an astounding variety of species, many of which are endemic, meaning they can only be found in those specific environments. The Amazon Rainforest, for instance, is often referred to as the "lungs of the Earth," as it hosts approximately 10% of the world's known species, making it an irreplaceable hotspot for biodiversity.
Carbon Sequestration: Biome like tropical rainforests and temperate forests play a vital role in mitigating climate change. They absorb carbon dioxide (CO₂) from the atmosphere and store it in plant biomass, soil, and water, reducing the overall concentration of greenhouse gases that contribute to global warming. Coral reefs and wetlands also store significant amounts of carbon, acting as vital carbon sinks.
Ecosystem Services: [Prime Biome](https://filmfreeway.com/primebiomereviews) provide ecosystem services that directly impact human societies. These services include purifying air and water, regulating temperature, controlling floods, and providing food and resources for local communities. Wetlands, for example, act as natural water filters, while coral reefs protect coastlines from storm surges and erosion.
## Prime Terrestrial Biome
### Tropical Rainforests
Tropical rainforests, found near the equator, are perhaps the most iconic of Prime Biome. These lush, dense forests support unparalleled biodiversity, housing millions of species, including countless plants, insects, birds, and mammals. The Amazon Rainforest, often called the world's "biological treasure chest," stores a significant portion of the planet's carbon and plays a key role in regulating the global water cycle. Unfortunately, deforestation and land conversion for agriculture are rapidly diminishing the size of rainforests, posing a dire threat to their survival and the species that depend on them.
### Temperate Forests
Temperate forests are found in regions with moderate climates, such as parts of North America, Europe, and East Asia. These forests are home to large mammals, migratory birds, and diverse plant species. They also play a significant role in regulating the Earth's climate through carbon sequestration. However, urbanization, industrial development, and agriculture are contributing to deforestation in temperate regions, threatening both biodiversity and carbon storage capacity.
### Tundra
The tundra biome is located in the northernmost parts of the globe, characterized by its cold temperatures and minimal precipitation. Though it may appear barren, the tundra supports hardy species adapted to extreme conditions, such as Arctic foxes, polar bears, and migratory birds. The tundra plays a vital role in carbon storage, but as the climate warms, this fragile ecosystem is experiencing rapid thawing, releasing large amounts of stored carbon and further accelerating global warming.
## **[Click here to order from official website of Prime Biome](https://primebiome.store/product/prime-biome-new/)**
## Prime Aquatic Biome
### Coral Reefs
Coral reefs, often referred to as the “rainforests of the sea,” are one of the most biodiverse ecosystems on Earth. These ecosystems, which occur in tropical marine waters, support thousands of species of fish, invertebrates, and other marine life. They protect coastal areas from erosion, provide food and resources for millions of people, and contribute to the global carbon cycle. However, coral reefs are under severe threat due to rising ocean temperatures, ocean acidification, and pollution, leading to coral bleaching and ecosystem degradation.
### Wetlands and Marshes
Wetlands, including marshes, swamps, and bogs, are vital ecosystems for water purification, flood control, and carbon storage. They are home to numerous species of birds, amphibians, and aquatic plants. Wetlands act as buffers against storms and help recharge groundwater supplies. Despite their importance, wetlands are being drained for agriculture and urban development, leading to habitat loss and reduced water quality.
### The Impact of Human Activity on Prime Biome
Human activities have had a profound impact on [Prime Biome.](https://www.facebook.com/PrimeBiomereviewsusa/) Deforestation, land-use change, pollution, and climate change have all contributed to the degradation of these vital ecosystems. For instance, the Amazon Rainforest has experienced significant deforestation due to logging, cattle ranching, and agriculture. This not only threatens the biodiversity of the region but also diminishes its capacity to store carbon, accelerating the effects of climate change.
Climate change itself poses a major risk to Prime Biome. Rising temperatures, changing precipitation patterns, and extreme weather events are causing shifts in ecosystems and threatening species that are not able to adapt. The melting of the Arctic tundra, for example, is releasing vast amounts of stored carbon into the atmosphere, exacerbating global warming.
## Conservation and the Future of Prime Biome
Efforts to conserve [Prime Biome](https://www.facebook.com/groups/primebiomereviewsusa) are more urgent than ever. Conservation strategies focus on protecting large tracts of these ecosystems through the establishment of protected areas, promoting sustainable land use practices, and restoring degraded habitats. Global initiatives like the United Nations' Sustainable Development Goals (SDGs) and agreements such as the Paris Agreement aim to curb deforestation, reduce carbon emissions, and protect biodiversity.
In addition to governmental policies, local communities, indigenous groups, and environmental organizations play a crucial role in conservation efforts. Indigenous knowledge of sustainable land and resource management has proven valuable in preserving the health of Prime Biome, as seen in many community-led conservation projects.
## Conclusion
Prime Biome are the lifeblood of Earth's ecosystems, providing essential services that maintain biodiversity, regulate climate, and support human life. As the threats to these ecosystems intensify, it is imperative that we take swift and effective action to protect them. From the dense rainforests of the Amazon to the fragile coral reefs of the Pacific, these [Prime Biome](https://www.facebook.com/PrimeBiomeReviewsExperiences/) require our attention and care. By prioritizing conservation, reducing our environmental footprint, and fostering international cooperation, we can ensure that these vital ecosystems continue to thrive for generations to come.
## **[Click here to order from official website of Prime Biome](https://primebiome.store/product/prime-biome-new/)**
|
Yntec/Reliberate | Yntec | "2023-11-23T12:56:35Z" | 749 | 6 | diffusers | [
"diffusers",
"safetensors",
"General",
"Anime",
"Art",
"XpucT",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-30T21:42:33Z" | ---
license: cc-by-nc-nd-4.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Anime
- Art
- XpucT
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Reliberate
Original page: https://huggingface.co/philz1337/reliberate
Samples and prompt:


anthropomorphic pig Programmer with laptop, funny, colorfull
|
mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF | mradermacher | "2024-12-21T13:12:18Z" | 144 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Infermatic/14b-Qwen2.5-Infermatic-Crea-v1",
"base_model:quantized:Infermatic/14b-Qwen2.5-Infermatic-Crea-v1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-20T01:47:46Z" | ---
base_model: Infermatic/14b-Qwen2.5-Infermatic-Crea-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Infermatic/14b-Qwen2.5-Infermatic-Crea-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
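Alternatively, a GGUF quant can be pulled and run from Python with `llama-cpp-python`; this is a hedged sketch, and the filename simply picks the Q4_K_M ("fast, recommended") row from the table below:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo and load it with the llama.cpp bindings.
path = hf_hub_download(
    repo_id="mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF",
    filename="14b-Qwen2.5-Infermatic-Crea-v1.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a short poem about winter.", max_tokens=64)
print(out["choices"][0]["text"])
```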
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/14b-Qwen2.5-Infermatic-Crea-v1-i1-GGUF/resolve/main/14b-Qwen2.5-Infermatic-Crea-v1.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Xu-Ouyang/pythia-14m-int4-step71000-GPTQ-wikitext2 | Xu-Ouyang | "2024-07-17T22:16:26Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-17T22:16:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sharanharsoor/pii-ner-model-gretel-2 | sharanharsoor | "2025-01-27T16:51:22Z" | 9 | 0 | null | [
"safetensors",
"distilbert",
"region:us"
] | null | "2025-01-27T16:45:01Z" | # PII NER Model
This model is trained to detect Personally Identifiable Information (PII) in text. It can identify various types of PII, including:
- Names (first and last)
- Email addresses
- Phone numbers
- Social Security Numbers (SSN)
- Credit card numbers
- Medical record numbers
- Employee IDs
- Addresses
- And more
## Usage
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained('sharanharsoor/pii-ner-model-gretel-2')
tokenizer = AutoTokenizer.from_pretrained('sharanharsoor/pii-ner-model-gretel-2')
```
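For quick inference, the checkpoint can also be run through the `token-classification` pipeline. This is a minimal sketch, assuming the model's label set works with the pipeline's default aggregation; the sample text and detected entities are illustrative only:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens back into whole entity spans
ner = pipeline(
    "token-classification",
    model="sharanharsoor/pii-ner-model-gretel-2",
    aggregation_strategy="simple",
)

text = "Contact Jane Doe at jane.doe@example.com or 555-0100."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```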
## Training Data
The model was trained on the Gretel PII dataset with custom modifications.
## Performance
The model achieves:
- F1 Score: 0.06810
- Precision: 0.0507
- Recall: 0.1033
- Accuracy: 0.6090
|
MaziyarPanahi/YamshadowInex12_T3qm7xpT3q | MaziyarPanahi | "2024-04-08T03:38:12Z" | 16 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/T3qm7xpT3q-7B",
"base_model:merge:automerger/T3qm7xpT3q-7B",
"base_model:automerger/YamshadowInex12-7B",
"base_model:merge:automerger/YamshadowInex12-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-08T03:24:41Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: YamshadowInex12_T3qm7xpT3q
base_model:
- automerger/YamshadowInex12-7B
- automerger/T3qm7xpT3q-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# YamshadowInex12_T3qm7xpT3q
YamshadowInex12_T3qm7xpT3q is a merge of the following models:
* [automerger/YamshadowInex12-7B](https://huggingface.co/automerger/YamshadowInex12-7B)
* [automerger/T3qm7xpT3q-7B](https://huggingface.co/automerger/T3qm7xpT3q-7B)
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/YamshadowInex12_T3qm7xpT3q"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
isspek/roberta-base_ebola_mistral_5_2e-5_16_undersampling_0.1 | isspek | "2024-12-01T14:50:39Z" | 179 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-17T11:00:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TransferGraph/jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony | TransferGraph | "2024-02-27T16:59:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:jasonyim2/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:jasonyim2/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | "2024-02-27T16:59:55Z" | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: jasonyim2/distilbert-base-uncased-finetuned-emotion
model-index:
- name: jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.6366492146596858
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [jasonyim2/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/jasonyim2/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6366
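The card does not include a usage snippet; below is a minimal loading sketch, assuming the adapter can be resolved with PEFT's auto classes (the class choice and label handling are assumptions, not verified against this repo):

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo = "TransferGraph/jasonyim2_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony"
# Loads the base model referenced in the adapter config and applies the LoRA weights
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained("jasonyim2/distilbert-base-uncased-finetuned-emotion")

inputs = tokenizer("What a totally unexpected plot twist...", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()  # predicted irony class id
```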
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5581 | None | 0 |
| 0.5958 | 0.6911 | 0 |
| 0.5916 | 0.6362 | 1 |
| 0.6094 | 0.6146 | 2 |
| 0.6178 | 0.5849 | 3 |
| 0.6241 | 0.5629 | 4 |
| 0.6251 | 0.5452 | 5 |
| 0.6325 | 0.5254 | 6 |
| 0.6366 | 0.5095 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF | mradermacher | "2025-01-11T02:54:57Z" | 730 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:recursal/QRWKV6-32B-Instruct-Preview-v0.1",
"base_model:quantized:recursal/QRWKV6-32B-Instruct-Preview-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-10T22:25:07Z" | ---
base_model: recursal/QRWKV6-32B-Instruct-Preview-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
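As a concrete starting point, here is a minimal Python sketch using `llama-cpp-python`, assuming your llama.cpp build supports this model's architecture and that you have downloaded one of the quant files listed below (the Q4_K_M file is the one recommended in the table):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# model_path points at a locally downloaded quant from the table below
llm = Llama(model_path="QRWKV6-32B-Instruct-Preview-v0.1.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```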
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 8.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 9.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 13.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 13.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 16.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 16.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.8 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 20.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 21.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 22.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 23.0 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.7 | |
| [GGUF](https://huggingface.co/mradermacher/QRWKV6-32B-Instruct-Preview-v0.1-i1-GGUF/resolve/main/QRWKV6-32B-Instruct-Preview-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 29.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nolanaatama/rccrtmnsthprkrvcv2450pchrys | nolanaatama | "2023-06-29T07:05:39Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-29T07:02:14Z" | ---
license: creativeml-openrail-m
---
|
freemindcore/eanovnty-esmlvnty-0413-s9000-full | freemindcore | "2023-04-14T08:38:37Z" | 0 | 0 | null | [
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-04-13T05:49:48Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### EANOVNTY-ESMLVNTY-0413-s9000-full 4600 steps w/ learning rate 1e-6
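The card lists only the training settings above; as a hedged illustration, a checkpoint like this is typically tried with `diffusers`, assuming the repository is stored in diffusers format and that the instance token matches the model name (both unverified):

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo format and prompt token are assumptions; adjust to the actual checkpoint layout.
pipe = StableDiffusionPipeline.from_pretrained(
    "freemindcore/eanovnty-esmlvnty-0413-s9000-full", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of eanovnty").images[0]
image.save("sample.png")
```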
|
arthurmluz/ptt5-xlsumm-temario | arthurmluz | "2024-11-27T23:27:34Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"pt",
"base_model:arthurmluz/ptt5-xlsumm-30epochs",
"base_model:finetune:arthurmluz/ptt5-xlsumm-30epochs",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-09T15:02:32Z" | ---
license: mit
base_model: arthurmluz/ptt5-xlsumm-30epochs
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ptt5-xlsumm-temario
results: []
language:
- pt
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ptt5-xlsumm-temario
This model is a fine-tuned version of [arthurmluz/ptt5-xlsumm-30epochs](https://huggingface.co/arthurmluz/ptt5-xlsumm-30epochs) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4610
- Rouge1: 0.0891
- Rouge2: 0.0571
- Rougel: 0.0781
- Rougelsum: 0.0845
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 88 | 2.7016 | 0.0862 | 0.0326 | 0.0681 | 0.0805 | 19.0 |
| No log | 2.0 | 176 | 2.6413 | 0.0879 | 0.0389 | 0.0701 | 0.0828 | 19.0 |
| 2.9296 | 3.0 | 264 | 2.5893 | 0.0881 | 0.0438 | 0.0707 | 0.0827 | 19.0 |
| 2.9296 | 4.0 | 352 | 2.5650 | 0.0923 | 0.0479 | 0.0748 | 0.0871 | 19.0 |
| 2.646 | 5.0 | 440 | 2.5429 | 0.0885 | 0.0469 | 0.0732 | 0.0834 | 19.0 |
| 2.646 | 6.0 | 528 | 2.5247 | 0.088 | 0.0503 | 0.0739 | 0.0831 | 19.0 |
| 2.5072 | 7.0 | 616 | 2.5108 | 0.0891 | 0.0534 | 0.0769 | 0.0851 | 19.0 |
| 2.5072 | 8.0 | 704 | 2.5039 | 0.0884 | 0.0547 | 0.0764 | 0.0848 | 19.0 |
| 2.5072 | 9.0 | 792 | 2.4948 | 0.0864 | 0.0536 | 0.0751 | 0.083 | 19.0 |
| 2.4128 | 10.0 | 880 | 2.4836 | 0.0869 | 0.0546 | 0.076 | 0.0839 | 19.0 |
| 2.4128 | 11.0 | 968 | 2.4813 | 0.0866 | 0.0543 | 0.0764 | 0.0832 | 19.0 |
| 2.356 | 12.0 | 1056 | 2.4768 | 0.0864 | 0.0533 | 0.076 | 0.0828 | 19.0 |
| 2.356 | 13.0 | 1144 | 2.4728 | 0.0872 | 0.0556 | 0.0775 | 0.0838 | 19.0 |
| 2.2815 | 14.0 | 1232 | 2.4666 | 0.0877 | 0.0557 | 0.0774 | 0.0841 | 19.0 |
| 2.2815 | 15.0 | 1320 | 2.4667 | 0.0866 | 0.0552 | 0.0764 | 0.0829 | 19.0 |
| 2.2106 | 16.0 | 1408 | 2.4680 | 0.0869 | 0.0553 | 0.0772 | 0.0824 | 19.0 |
| 2.2106 | 17.0 | 1496 | 2.4647 | 0.0867 | 0.0553 | 0.0771 | 0.0828 | 19.0 |
| 2.2106 | 18.0 | 1584 | 2.4597 | 0.0875 | 0.0561 | 0.0777 | 0.0837 | 19.0 |
| 2.1809 | 19.0 | 1672 | 2.4601 | 0.0873 | 0.0557 | 0.0773 | 0.0833 | 19.0 |
| 2.1809 | 20.0 | 1760 | 2.4596 | 0.0873 | 0.0561 | 0.0773 | 0.0835 | 19.0 |
| 2.1541 | 21.0 | 1848 | 2.4592 | 0.0875 | 0.0561 | 0.0777 | 0.0837 | 19.0 |
| 2.1541 | 22.0 | 1936 | 2.4620 | 0.0869 | 0.0551 | 0.0768 | 0.0828 | 19.0 |
| 2.1442 | 23.0 | 2024 | 2.4621 | 0.0869 | 0.0551 | 0.0768 | 0.0828 | 19.0 |
| 2.1442 | 24.0 | 2112 | 2.4619 | 0.0868 | 0.0553 | 0.0768 | 0.0828 | 19.0 |
| 2.1071 | 25.0 | 2200 | 2.4613 | 0.0868 | 0.0553 | 0.0768 | 0.0828 | 19.0 |
| 2.1071 | 26.0 | 2288 | 2.4618 | 0.0873 | 0.0557 | 0.0768 | 0.0828 | 19.0 |
| 2.1071 | 27.0 | 2376 | 2.4607 | 0.0892 | 0.0575 | 0.0782 | 0.0847 | 19.0 |
| 2.08 | 28.0 | 2464 | 2.4606 | 0.0874 | 0.056 | 0.0769 | 0.083 | 19.0 |
| 2.08 | 29.0 | 2552 | 2.4616 | 0.0891 | 0.0571 | 0.0781 | 0.0845 | 19.0 |
| 2.1013 | 30.0 | 2640 | 2.4610 | 0.0891 | 0.0571 | 0.0781 | 0.0845 | 19.0 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1 |
sailoromkar/Reinforce-1 | sailoromkar | "2023-08-04T04:56:47Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-04T04:56:35Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
letitbe1/Taxi-v3 | letitbe1 | "2025-02-27T13:42:47Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-02-27T13:42:44Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="letitbe1/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
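Note that `load_from_hub` is a helper defined in the Deep RL course notebooks rather than a published library function. A minimal sketch of what it does, assuming the Q-table and its metadata were pickled as in the course:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table, env_id, hyperparameters) and load it
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```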
|
akdeniz27/dqn-SpaceInvadersNoFrameskip-v4 | akdeniz27 | "2023-07-17T13:25:54Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-17T13:25:14Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 617.00 +/- 195.64
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga akdeniz27 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga akdeniz27 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga akdeniz27
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
akhbar/QwQ-32B-Preview-abliterated-4bit-128g-actorder_True-GPTQ | akhbar | "2024-12-01T03:27:32Z" | 18 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-12-01T00:39:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
texanrangee/ec526e98-975d-44f0-ba50-0f352084d16e | texanrangee | "2025-03-05T19:45:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-05T16:25:26Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
frank2030/llama3_chat_tune_gguf_q8_0 | frank2030 | "2024-07-02T21:24:29Z" | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-02T21:18:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** frank2030
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
t-vishnu/my_awesome_model1 | t-vishnu | "2024-05-16T21:29:50Z" | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-16T21:07:38Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: t-vishnu/my_awesome_model1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t-vishnu/my_awesome_model1
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3294
- Validation Loss: 0.3083
- Train Accuracy: {'accuracy': 0.8761904761904762}
- Train Precision: {'precision': 0.9197572488199596}
- Train Recall: {'recall': 0.7728045325779037}
- Train F1 Score: {'f1': 0.8399014778325123}
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Recall | Train F1 Score | Epoch |
|:----------:|:---------------:|:--------------------------------:|:---------------------------------:|:------------------------------:|:--------------------------:|:-----:|
| 0.3294 | 0.3083 | {'accuracy': 0.8761904761904762} | {'precision': 0.9197572488199596} | {'recall': 0.7728045325779037} | {'f1': 0.8399014778325123} | 0 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
pankajmathur/orca_mini_v5_8b_dpo | pankajmathur | "2024-12-29T23:50:27Z" | 39 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"en",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-30T19:00:11Z" | ---
language:
- en
license: llama3
library_name: transformers
pipeline_tag: text2text-generation
model-index:
- name: orca_mini_v5_8b_dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 48.96
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v5_8b_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 29.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v5_8b_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 7.48
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v5_8b_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v5_8b_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.94
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v5_8b_dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.51
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v5_8b_dpo
name: Open LLM Leaderboard
---
**Model Name: llama_3_orca_mini_v5_8b_dpo**
# llama_3_orca_mini_v5_8b trained with various DPO Datasets
<img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" />
<strong>
"Obsessed with GenAI's potential? So am I ! Let's create together 🚀 <a href="https://www.linkedin.com/in/pankajam" target="_blank">https://www.linkedin.com/in/pankajam</a>"
</strong>
<br>
### NOTICE
By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning and any kind of merges.
I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model.
Dive in and innovate!
### Evaluation
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.78|
|AI2 Reasoning Challenge (25-Shot)|61.86|
|HellaSwag (10-Shot) |82.35|
|MMLU (5-Shot) |65.10|
|TruthfulQA (0-shot) |56.24|
|Winogrande (5-shot) |73.40|
|GSM8k (5-shot) |67.70|
### Example Usage
Here is the ChatML prompt format
```
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
```
Below is a code example showing how to use this model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v5_8b_dpo"
# AutoModelForCausalLM (rather than AutoModel) is required so the model has a generate() head
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)

messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]

# apply_chat_template with return_tensors="pt" returns the input ids directly
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
This model is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
**Quants**
GGUF: Coming Soon
AWQ: Coming Soon
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pankajmathur__orca_mini_v5_8b_dpo)
| Metric |Value|
|-------------------|----:|
|Avg. |19.96|
|IFEval (0-Shot) |48.96|
|BBH (3-Shot) |29.61|
|MATH Lvl 5 (4-Shot)| 7.48|
|GPQA (0-shot) | 3.24|
|MuSR (0-shot) | 6.94|
|MMLU-PRO (5-shot) |23.51|
|
mav23/dolphin-llama-13b-GGUF | mav23 | "2024-11-23T17:38:06Z" | 78 | 0 | null | [
"gguf",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2024-11-23T16:15:44Z" | ---
license: other
task_categories:
- text-generation
model-index:
- name: dolphin-llama-13b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-llama-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-llama-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-llama-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.23
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-llama-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-llama-13b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 14.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-llama-13b
name: Open LLM Leaderboard
---
Dolphin 🐬
https://erichartford.com/dolphin
This model is based on llama1, so it is for non-commercial use only. Future versions will be trained on llama2 and other open models that are suitable for commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model compliant with any request. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is an open source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
After uncensoring, deduping, and cleaning, our dataset consists of:
- 842,610 instructions of FLANv2 augmented with GPT-4 completions
- 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions
We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions: we included all 75k CoT instructions from the FLAN-1m dataset rather than sampling them, and since many items were duplicated, we removed the duplicates.
Then we filtered out instances of alignment, refusal, avoidance, and bias, in order to produce an uncensored model on which your personalized alignment LoRA can be layered.
We also cleaned the data.
## Training
We trained with the flan5m (gpt3.5 completions) dataset in its entirety for 3 epochs at a learning rate of 2e-5 before we stopped training to avoid overfitting.
We trained with the flan1m (gpt4 completions) dataset in its entirety for 2.5 epochs at a learning rate of 1e-5 before we stopped training to avoid overfitting.
It took about 600 hours to train on 8x H100s
We used a prompt format similar to Vicuna, but we added the SYSTEM: field.
Prompt format:
```
SYSTEM: {system}
USER: {prompt}
ASSISTANT:
```
Example:
```
SYSTEM: you are an expert marine biologist.
USER: Please list 10 ways that dolphins are superior to orcas.
ASSISTANT:
```
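A small helper makes it straightforward to assemble prompts in this format programmatically (a sketch based only on the template above):

```python
def build_prompt(system: str, user: str) -> str:
    # Vicuna-style prompt with the additional SYSTEM: field used by Dolphin
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

prompt = build_prompt(
    "you are an expert marine biologist.",
    "Please list 10 ways that dolphins are superior to orcas.",
)
```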
## Evaluation
Evaluation will be coming soon.
## Team
The core Dolphin Team includes:
- Eric "Faldore" Hartford
- Pankaj Mathur
- Rob "Rohan" O'Callahan
- Tom "TheBloke" Jobbins
## Gratitude
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to WingLian, NanoBit, Teknium for helpful advice
- Special thanks to EdenCoder and chirper.ai for mentorship and financial sponsorship.
- Special thanks to Kilkonie for his very valued mentorship.
- Thank you to Catto.
- Thank you to Nicolai Schleifer, financial sponsor.
- Thank you to Eric Fleming, financial sponsor.
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
- And biggest thanks and love to my wife Hope and my babies, who have endured two months of my absenteeism while working long days and nights on this project.
## Example Output
```
SYSTEM: you are an expert marine biologist.
USER: Please list 10 ways that dolphins are superior to orcas.
ASSISTANT:
```
1. Social structure: Dolphins live in complex social groups called pods, which consist of extended family units and can include hundreds of individuals. Orcas, on the other hand, typically form smaller, more temporary groupings known as clans or matrilineal kinship groups.
2. Communication: Dolphins have a highly developed system of vocalizations, body language, and even telepathy to communicate with each other for various purposes such as coordinating cooperative hunting strategies, sharing information about food sources, and maintaining social bonds. While orcas also exhibit some communication abilities, their methods are generally less sophisticated compared to those of dolphins.
3. Intelligence: Dolphins are widely regarded as one of the most intelligent animal species, showcasing advanced problem-solving skills, self-awareness, tool use (such as using sea sponges as tools), and learning capabilities. Some dolphin species have been observed demonstrating altruism and understanding of human emotions. In contrast, while orcas are also smart, they do not display the same level of cognitive complexity as dolphins.
4. Playfulness: Dolphins are well-known for their playful behavior, often engaging in acrobatic displays, chasing each other's tails, and playing with objects like balls or seaweed. This playful nature is thought to be essential for their social bonding and mental stimulation. Although orcas may occasionally interact playfully, it is not as prevalent or diverse as among dolphins.
5. Agility and speed: Dolphins are incredibly agile swimmers, capable of reaching speeds up to 30 miles per hour (48 km/h) when pursuing prey. They can change direction rapidly and maneuver through tight spaces with ease. While orcas are powerful predators and fast swimmers themselves, they cannot match the sheer acceleration and agility of dolphins.
6. Feeding habits: Both dolphins and orcas are carnivorous, feeding primarily on fish and cephalopods. However, dolphins employ a wider variety of hunting techniques, including cooperative hunting strategies involving herding, encircling, and driving schools of fish into shallow waters where they become disoriented and easier to catch. Orcas mainly rely on ambushing their prey from below and using their powerful jaws to strike quickly.
7. Reproduction: Female dolphins usually give birth to a single calf after a gestation period of around 12 months. The mother plays a crucial role in raising and nurturing her offspring until they become independent. On the other hand, female orcas tend to breed every three to five years, giving birth to multiple offspring at once, and relying heavily on their mothers for protection and teaching.
8. Vocal repertoire: Dolphins possess a wide range of sounds, including clicks, whistles, and body sounds, which allow them to convey different messages and establish unique identities within their pods. Orcas, too, produce vocalizations but lack the diversity found in dolphin calls.
9. Conservation status: Many dolphin species face threats due to habitat loss, pollution, overfishing, and climate change. As a result, several populations are considered vulnerable or endangered by the International Union for Conservation of Nature (IUCN). In comparison, orca conservation status varies depending on the region; however, no species is currently listed as critically endangered or facing extinction.
10. Adaptability: Dolphins have adapted to various aquatic environments, ranging from warm coastal waters to colder open oceans. Their versatility allows them to thrive in both tropical and temperate climates. Orcas, conversely, are adapted to specific habitats such as cooler coastal waters and are mostly found in the Northern Hemisphere.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-llama-13b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 48.6 |
| ARC (25-shot) | 55.55 |
| HellaSwag (10-shot) | 77.11 |
| MMLU (5-shot) | 52.16 |
| TruthfulQA (0-shot) | 52.23 |
| Winogrande (5-shot) | 69.93 |
| GSM8K (5-shot) | 14.4 |
| DROP (3-shot) | 18.83 |
An updated set of results from the same leaderboard (the average below is recomputed without DROP):
| Metric |Value|
|---------------------------------|----:|
|Avg. |53.56|
|AI2 Reasoning Challenge (25-Shot)|55.55|
|HellaSwag (10-Shot) |77.11|
|MMLU (5-Shot) |52.16|
|TruthfulQA (0-shot) |52.23|
|Winogrande (5-shot) |69.93|
|GSM8k (5-shot) |14.40|
|
APLunch/a2c-PandaReachDense-v3 | APLunch | "2024-04-23T03:06:28Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-23T03:02:10Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 Hub naming convention and is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename based on the usual SB3 Hub naming convention.
checkpoint = load_from_hub("APLunch/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
coffiee/ld23 | coffiee | "2025-02-23T18:30:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-23T18:29:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
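A minimal sketch based only on this repo's `llama` / `text-generation` tags; the prompt and generation settings are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "coffiee/ld23"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)  # max_new_tokens is an arbitrary choice
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```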
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pcuenq/tiny-gemma-test3 | pcuenq | "2024-04-03T17:26:08Z" | 161 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-04-03T17:25:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
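A minimal sketch based only on this repo's `gemma` / `feature-extraction` tags; all details below are assumptions:

```python
from transformers import AutoModel, AutoTokenizer

repo = "pcuenq/tiny-gemma-test3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("hello", return_tensors="pt")
hidden = model(**inputs).last_hidden_state  # token-level feature vectors
print(hidden.shape)
```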
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Izzy-Viral-Videos-Lamesa/Full.izzy.viral.video.lamesa.izzy.tiktok.izzy.viral.video.by.Izzyviral | Izzy-Viral-Videos-Lamesa | "2025-02-15T15:09:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-15T14:48:12Z" | <a href="https://hd.poltulive.site/viral-videos/?v=Izzy-Viral-Videos-Lamesa"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a href="https://hd.poltulive.site/viral-videos/?v=Izzy-Viral-Videos-Lamesa">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a> </br>
<a href="https://hd.poltulive.site/viral-videos/?v=Izzy-Viral-Videos-Lamesa">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a> </br>
|
Sagicc/whisper-small-sr-onnx | Sagicc | "2024-10-19T21:28:44Z" | 18 | 0 | transformers.js | [
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"sr",
"base_model:Sagicc/whisper-small-sr-yodas-v2",
"base_model:quantized:Sagicc/whisper-small-sr-yodas-v2",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2024-04-20T10:37:37Z" | ---
base_model: Sagicc/whisper-small-sr-yodas-v2
language:
- sr
library_name: transformers.js
license: mit
---
Fine-tuned Serbian Whisper model, converted to ONNX for use with Transformers.js.
This is [Sagicc/whisper-small-sr-yodas-v2](https://huggingface.co/Sagicc/whisper-small-sr-yodas-v2) with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
prithivMLmods/FastThink-0.5B-Tiny-abliterated | prithivMLmods | "2025-04-08T10:39:10Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"base_model:prithivMLmods/FastThink-0.5B-Tiny",
"base_model:finetune:prithivMLmods/FastThink-0.5B-Tiny",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-19T00:25:16Z" | ---
library_name: transformers
tags:
- text-generation-inference
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/FastThink-0.5B-Tiny
pipeline_tag: text-generation
---
# **FastThink-0.5B-Tiny-abliterated**
> FastThink-0.5B-Tiny-abliterated is a reasoning-focused model based on Qwen2.5. We have released a range of base and instruction-tuned language models spanning 0.5 to 72 billion parameters. Qwen2.5 introduces the following improvements over Qwen2: significantly enhanced knowledge and much stronger coding and mathematics capabilities, thanks to specialized expert models in those domains; major improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and producing structured outputs, especially JSON; greater resilience to diverse system prompts, which improves role-play implementation and condition-setting for chatbots; long-context support for up to 128K tokens with outputs of up to 8K tokens; and multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**Architecture**: Transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias, and tied word embeddings.
# **Quickstart with Transformer**
Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/FastThink-0.5B-Tiny-abliterated"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
# **Dataset Preparation**
This script is designed to load, process, and combine multiple datasets into a single, standardized format suitable for training conversational AI models. The script uses the `datasets` library to load and manipulate the datasets, and the `chat_templates` library to standardize the conversation format.
## Example
```python
# `datasets` utilities used below; the mapping helpers (`add_conversations_column*`,
# `formatting_prompts_func`) and the chat-template utilities (`standardize_sharegpt`,
# `get_chat_template`) are assumed to be defined or imported elsewhere in the full script.
from datasets import load_dataset, concatenate_datasets

# Load the initial three datasets
dataset1 = load_dataset("PowerInfer/LONGCOT-Refine-500K", split="train")
dataset2 = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
dataset3 = load_dataset("AI-MO/NuminaMath-CoT", split="train")
# Map conversation columns for all datasets
dataset1 = dataset1.map(add_conversations_column, batched=False)
dataset2 = dataset2.map(add_conversations_column_prompt_qwq, batched=False)
dataset3 = dataset3.map(add_conversations_column_prompt_solution, batched=False)
# Combine all datasets
combined_dataset = concatenate_datasets([dataset1, dataset2, dataset3])
# Standardize using the ShareGPT format
combined_dataset = standardize_sharegpt(combined_dataset)
# Initialize the tokenizer with a specific chat template
tokenizer = get_chat_template(tokenizer, chat_template="qwen-2.5")
# Apply formatting function to the combined dataset
combined_dataset = combined_dataset.map(formatting_prompts_func, batched=True)
# Print the first few examples to verify the output
print(combined_dataset[:50000])
```
# **Intended Use**
1. **Reasoning Tasks**: FastThink-0.5B-Tiny-abliterated is optimized for reasoning-focused applications, such as logical problem-solving, decision-making, and analytical workflows.
2. **Instruction Following**: Ideal for scenarios where precise adherence to instructions is required, including generating structured outputs like JSON or tables.
3. **Multilingual Support**: Suitable for use in multilingual environments, supporting over 29 languages, making it versatile for global applications.
4. **Coding and Mathematics**: Highly effective in tasks involving coding, debugging, or solving mathematical problems, leveraging expert domain knowledge.
5. **Role-play Scenarios**: Can simulate conversational agents or personas for role-playing, enhancing chatbot and virtual assistant implementations.
6. **Long-form Content Creation**: Designed to generate and manage long-form text (up to 8K tokens) while maintaining context, making it ideal for tasks like report writing or storytelling.
7. **Understanding and Processing Structured Data**: Efficient at interpreting and working with structured data, such as tables or hierarchical formats.
8. **Low-Resource Applications**: With a smaller parameter size (0.5B), it is well-suited for applications with limited computational resources or edge deployment.
# **Limitations**
1. **Limited Model Size**: As a 0.5B-parameter model, its reasoning and comprehension capabilities are less advanced compared to larger models, particularly for highly complex tasks.
2. **Contextual Limitations**: Although it supports a context length of up to 128K tokens, its ability to effectively utilize such a long context may vary, particularly in tasks requiring intricate cross-referencing of earlier inputs.
3. **Accuracy in Domain-Specific Tasks**: While capable in coding and mathematics, it may struggle with highly specialized or esoteric domain knowledge compared to models fine-tuned specifically for those areas.
4. **Ambiguity Handling**: May misinterpret vague or poorly structured prompts, leading to less accurate or unintended results.
5. **Long-Context Tradeoffs**: Generating or processing very long outputs (e.g., close to the 8K token limit) could result in decreased coherence or relevance toward the end.
6. **Multilingual Performance**: Although it supports 29 languages, its proficiency and fluency may vary across languages, with some underrepresented languages possibly seeing reduced performance.
7. **Resource-Intensive for Long Contexts**: Using its long-context capabilities (128K tokens) can be computationally demanding, requiring significant memory and processing power.
8. **Dependence on Fine-Tuning**: For highly specialized tasks or domains, additional fine-tuning may be necessary to achieve optimal performance. |
VERSIL91/3dcb101c-4c96-49bb-95bd-ef89413638a8 | VERSIL91 | "2025-01-16T22:29:45Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | "2025-01-16T22:29:40Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 29f8b8fa-baa3-4949-8a21-32a1cb26bc3a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e2d38772cb7622a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e2d38772cb7622a4_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: null
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e2d38772cb7622a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b38e5bf8-7611-49e3-81d6-fb9d71671857
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b38e5bf8-7611-49e3-81d6-fb9d71671857
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 29f8b8fa-baa3-4949-8a21-32a1cb26bc3a
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.4834 |
| 6.2777 | 0.0017 | 10 | 1.4167 |
| 5.2236 | 0.0033 | 20 | 1.2068 |
| 4.7047 | 0.0050 | 30 | 1.1702 |
| 4.4247 | 0.0066 | 40 | 1.1476 |
| 3.9303 | 0.0083 | 50 | 1.1446 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
spaly99/my-setfit-model-dataset-PG-OCR-3 | spaly99 | "2024-04-05T17:22:52Z" | 4 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"model-index",
"region:us"
] | text-classification | "2024-04-05T13:06:54Z" | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
- precision
- recall
- f1
widget:
- text: GMB Gambia
- text: ' end flyout 2 '
- text: 'Books
'
- text: Persistent
- text: Session
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.87325
name: Accuracy
- type: precision
value: 0.8566450970632156
name: Precision
- type: recall
value: 0.8871134020618556
name: Recall
- type: f1
value: 0.8716130665991391
name: F1
---
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| True | <ul><li>'-. Pepsi-Colacold beats any cola cold! '</li><li>"Use “Jemes! et : L lemen peeple wen't Lemon. “i720 ait? "</li><li>'Ifit happens once, it could happen again. soptacaceee tates | WOE ¥ 1800 774 5025. '</li></ul> |
| False | <ul><li>'ps-script'</li><li>'Make your bidder browser agnostic to access high-performing cookie alternative supply'</li><li>'International Students & Scholars'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy | Precision | Recall | F1 |
|:--------|:---------|:----------|:-------|:-------|
| **all** | 0.8732 | 0.8566 | 0.8871 | 0.8716 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("spaly99/my-setfit-model-dataset-PG-OCR-3")  # this repo's id
# Run inference
preds = model("Books
")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:-----|
| Word count | 1 | 8.4845 | 1060 |
| Label | Training Sample Count |
|:------|:----------------------|
| False | 7940 |
| True | 8060 |
### Framework Versions
- Python: 3.11.0
- SetFit: 1.0.3
- Sentence Transformers: 2.3.0
- Transformers: 4.37.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
HamAndCheese82/math-ocr-donut-v2 | HamAndCheese82 | "2024-04-24T15:37:14Z" | 11 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"validation_1",
"validation_2",
"validation_3",
"validation_4",
"validation_5",
"validation_6",
"validation_7",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-04-23T07:52:07Z" | ---
library_name: transformers
tags:
- validation_1
- validation_2
- validation_3
- validation_4
- validation_5
- validation_6
- validation_7
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
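A minimal sketch based on this repo's `vision-encoder-decoder` / `image-text-to-text` tags, assuming Donut-style usage and that processor files ship with the repo; the input file name is hypothetical:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "HamAndCheese82/math-ocr-donut-v2"
processor = DonutProcessor.from_pretrained(repo)  # assumes the repo includes processor files
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("equation.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_length=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```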
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huangyuyang/Qwen-7B-Chat-int8.flm | huangyuyang | "2023-08-16T09:51:01Z" | 0 | 4 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-08-16T09:06:35Z" | ---
license: apache-2.0
---
fastllm-format (`.flm`) export of Qwen-7B-Chat, quantized to int8.
GitHub: https://github.com/ztxz16/fastllm
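A minimal loading sketch using fastllm's Python bindings; the local `.flm` path is hypothetical (point it at the file downloaded from this repo):
```python
from fastllm_pytools import llm

# Hypothetical local path to the .flm file downloaded from this repo.
model = llm.model("Qwen-7B-Chat-int8.flm")
print(model.response("Hello"))
```
|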
sb3/qrdqn-Acrobot-v1 | sb3 | "2022-10-11T15:13:20Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"Acrobot-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-06-02T14:37:14Z" | ---
library_name: stable-baselines3
tags:
- Acrobot-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: -67.30 +/- 6.97
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Acrobot-v1
type: Acrobot-v1
---
# **QRDQN** Agent playing **Acrobot-v1**
This is a trained model of a **QRDQN** agent playing **Acrobot-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env Acrobot-v1 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env Acrobot-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env Acrobot-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env Acrobot-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 50000),
('exploration_final_eps', 0.1),
('exploration_fraction', 0.12),
('gamma', 0.99),
('gradient_steps', -1),
('learning_rate', 0.00063),
('learning_starts', 0),
('n_timesteps', 100000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256], n_quantiles=25)'),
('target_update_interval', 250),
('train_freq', 4),
('normalize', False)])
```
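For reference, a minimal sketch that re-creates these hyperparameters directly with SB3-Contrib, outside the RL Zoo workflow (the environment id is passed as a string and resolved by SB3):
```python
from sb3_contrib import QRDQN

# Mirrors the zoo hyperparameters listed above.
model = QRDQN(
    "MlpPolicy",
    "Acrobot-v1",
    learning_rate=0.00063,
    buffer_size=50_000,
    learning_starts=0,
    batch_size=128,
    gamma=0.99,
    train_freq=4,
    gradient_steps=-1,
    target_update_interval=250,
    exploration_fraction=0.12,
    exploration_final_eps=0.1,
    policy_kwargs=dict(net_arch=[256, 256], n_quantiles=25),
)
model.learn(total_timesteps=100_000)
```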
|
AlexKarap/CLMFormatterTest | AlexKarap | "2025-04-03T17:32:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-03T17:20:10Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlexKarap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lljllll2219/uk-mt5-base-xlsum-v1 | lljllll2219 | "2023-10-17T23:03:00Z" | 8 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"base_model:kravchenko/uk-mt5-base",
"base_model:finetune:kravchenko/uk-mt5-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2023-10-17T22:18:17Z" | ---
base_model: kravchenko/uk-mt5-base
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: uk-mt5-base-xlsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
config: ukrainian
split: validation
args: ukrainian
metrics:
- name: Rouge1
type: rouge
value: 3.8556
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uk-mt5-base-xlsum
This model is a fine-tuned version of [kravchenko/uk-mt5-base](https://huggingface.co/kravchenko/uk-mt5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3660
- Rouge1: 3.8556
- Rouge2: 1.5556
- Rougel: 3.7833
- Rougelsum: 3.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
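A minimal usage sketch for trying the checkpoint as a Ukrainian (XL-Sum) summarizer; the generation settings are assumptions:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lljllll2219/uk-mt5-base-xlsum-v1")
article = "..."  # a Ukrainian news article
print(summarizer(article, max_length=64, truncation=True)[0]["summary_text"])
```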
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 5.31 | 1.0 | 375 | 2.5055 | 2.3333 | 0.8 | 2.3143 | 2.3238 |
| 3.254 | 2.0 | 750 | 2.4034 | 3.5444 | 1.1111 | 3.5333 | 3.4833 |
| 2.9813 | 3.0 | 1125 | 2.3844 | 3.7278 | 1.4444 | 3.6889 | 3.6333 |
| 2.8117 | 4.0 | 1500 | 2.3785 | 3.3222 | 1.1111 | 3.2556 | 3.2167 |
| 2.681 | 5.0 | 1875 | 2.3671 | 4.1667 | 1.5556 | 4.0667 | 4.0444 |
| 2.5825 | 6.0 | 2250 | 2.3705 | 3.6889 | 1.5556 | 3.6 | 3.5333 |
| 2.5151 | 7.0 | 2625 | 2.3654 | 3.6889 | 1.5556 | 3.6 | 3.5333 |
| 2.4798 | 8.0 | 3000 | 2.3660 | 3.8556 | 1.5556 | 3.7833 | 3.6889 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
NghiBuine/phone-search-model | NghiBuine | "2024-12-12T03:09:02Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:63",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:keepitreal/vietnamese-sbert",
"base_model:finetune:keepitreal/vietnamese-sbert",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-12-12T03:06:48Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:63
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: keepitreal/vietnamese-sbert
widget:
- source_sentence: Samsung Galaxy S22 Ultra
sentences:
- Điện thoại camera 108MP
- Điện thoại RAM 12GB
- Điện thoại có zoom quang học 10x
- source_sentence: Google Pixel 8 Pro
sentences:
- Điện thoại có jack cắm tai nghe 3.5mm
- Điện thoại có bộ nhớ trong 256GB
- Điện thoại chụp ảnh đẹp
- source_sentence: Google Pixel 8
sentences:
- Điện thoại màn hình 120Hz
- Điện thoại giá rẻ
- Điện thoại Android mới nhất
- source_sentence: JBL Reflect Flow Pro
sentences:
- iPhone mới nhất
- Điện thoại màn hình cong
- Điện thoại có loa Harman Kardon
- source_sentence: Asus ROG Phone 7
sentences:
- Điện thoại có bút
- Điện thoại chơi game
- Điện thoại có đèn flash kép
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on keepitreal/vietnamese-sbert
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.2857142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5714285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09523809523809523
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.05714285714285715
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05714285714285715
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2857142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5714285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.25679948860544627
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.1598639455782313
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.17696777071484332
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.42857142857142855
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5714285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7142857142857143
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.14285714285714285
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1142857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07142857142857142
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.42857142857142855
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5714285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7142857142857143
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3358736991627618
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.21564625850340136
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.22075481533609612
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.2857142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5714285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.0
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.05714285714285715
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05714285714285715
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2857142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5714285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.22155623379830594
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.11564625850340135
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.13073998125841443
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.14285714285714285
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.2857142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.42857142857142855
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.047619047619047616
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.05714285714285715
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.042857142857142864
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.14285714285714285
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2857142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.42857142857142855
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.18057284162953233
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.10374149659863945
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.11943368484517551
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.14285714285714285
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.2857142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5714285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14285714285714285
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09523809523809523
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.05714285714285715
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05714285714285715
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.14285714285714285
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2857142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5714285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.32106066086016677
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.24801587301587302
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2591176744402551
name: Cosine Map@100
---
# SentenceTransformer based on keepitreal/vietnamese-sbert
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert) <!-- at revision a9467ef2ef47caa6448edeabfd8e5e5ce0fa2a23 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("NghiBuine/phone-search-model")
# Run inference
sentences = [
'Asus ROG Phone 7',
'Điện thoại chơi game',
'Điện thoại có đèn flash kép',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1429 |
| cosine_accuracy@3 | 0.2857 | 0.4286 | 0.0 | 0.1429 | 0.2857 |
| cosine_accuracy@5 | 0.2857 | 0.5714 | 0.2857 | 0.2857 | 0.2857 |
| cosine_accuracy@10 | 0.5714 | 0.7143 | 0.5714 | 0.4286 | 0.5714 |
| cosine_precision@1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1429 |
| cosine_precision@3 | 0.0952 | 0.1429 | 0.0 | 0.0476 | 0.0952 |
| cosine_precision@5 | 0.0571 | 0.1143 | 0.0571 | 0.0571 | 0.0571 |
| cosine_precision@10 | 0.0571 | 0.0714 | 0.0571 | 0.0429 | 0.0571 |
| cosine_recall@1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1429 |
| cosine_recall@3 | 0.2857 | 0.4286 | 0.0 | 0.1429 | 0.2857 |
| cosine_recall@5 | 0.2857 | 0.5714 | 0.2857 | 0.2857 | 0.2857 |
| cosine_recall@10 | 0.5714 | 0.7143 | 0.5714 | 0.4286 | 0.5714 |
| **cosine_ndcg@10** | **0.2568** | **0.3359** | **0.2216** | **0.1806** | **0.3211** |
| cosine_mrr@10 | 0.1599 | 0.2156 | 0.1156 | 0.1037 | 0.248 |
| cosine_map@100 | 0.177 | 0.2208 | 0.1307 | 0.1194 | 0.2591 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 63 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 63 samples:
| | positive | anchor |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 6.9 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 7.86 tokens</li><li>max: 12 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------|:-----------------------------------------|
| <code>Google Pixel 8</code> | <code>Điện thoại Android mới nhất</code> |
| <code>Samsung Galaxy S22 Ultra</code> | <code>Điện thoại có sạc không dây</code> |
| <code>Samsung Galaxy Note 20 Ultra đi kèm bút S Pen</code> | <code>Điện thoại có bút</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters (an inference sketch follows below):
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
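Because the model was trained with `MatryoshkaLoss`, its embeddings remain usable when truncated to any of the dimensions listed above. A minimal sketch using the `truncate_dim` argument of the Sentence Transformers API (the chosen dimension here is just an example):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dim embeddings instead of 768.
model = SentenceTransformer("NghiBuine/phone-search-model", truncate_dim=256)

embeddings = model.encode(["Điện thoại chơi game"])
print(embeddings.shape)  # (1, 256)
```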
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `bf16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| **1.0** | **1** | **0.3188** | **0.3971** | **0.3073** | **0.1945** | **0.2442** |
| 2.0 | 2 | 0.3209 | 0.3886 | 0.2545 | 0.1838 | 0.3194 |
| 3.0 | 3 | 0.2542 | 0.3359 | 0.2391 | 0.1838 | 0.3211 |
| 4.0 | 4 | 0.2568 | 0.3359 | 0.2216 | 0.1806 | 0.3211 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
e7n/GHE-1 | e7n | "2024-09-04T23:53:31Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-04T07:27:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
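In the absence of official instructions, here is a minimal, untested sketch that assumes the checkpoint loads with the standard 🤗 Transformers causal-LM API (the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo contains a complete Mistral-style causal LM checkpoint.
tokenizer = AutoTokenizer.from_pretrained("e7n/GHE-1")
model = AutoModelForCausalLM.from_pretrained("e7n/GHE-1", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```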
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rebolforces/ppo-LunarLander-v2 | rebolforces | "2022-07-30T00:43:21Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-07-23T09:28:37Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 285.83 +/- 15.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `<algo>-<env>.zip` naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename follows the usual "<algo>-<env>.zip" convention; adjust if the repo differs.
checkpoint = load_from_hub(repo_id="rebolforces/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
adfar/ppo-LunarLander-v2 | adfar | "2024-11-09T01:01:00Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-28T19:06:03Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 212.80 +/- 53.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `<algo>-<env>.zip` naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename follows the usual "<algo>-<env>.zip" convention; adjust if the repo differs.
checkpoint = load_from_hub(repo_id="adfar/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
silent666/Qwen-Qwen1.5-7B-1717959018 | silent666 | "2024-06-09T18:57:53Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-09T18:50:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
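In the absence of official instructions, a minimal, untested sketch using the chat-aware `text-generation` pipeline (assumes the repo contains a complete Qwen2 checkpoint):

```python
from transformers import pipeline

# Assumption: the repo holds a full chat-capable Qwen2 checkpoint.
generator = pipeline("text-generation", model="silent666/Qwen-Qwen1.5-7B-1717959018", device_map="auto")
messages = [{"role": "user", "content": "Summarize reinforcement learning in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```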
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
amiraMamdouh/finetuned_MLM__English_French_German | amiraMamdouh | "2024-10-14T14:41:37Z" | 114 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-10-14T12:51:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
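In the absence of official instructions, a minimal, untested sketch; the `translate English to German:` prefix is an assumption carried over from standard T5 translation fine-tunes:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumption: the model follows the usual T5 task-prefix convention.
tokenizer = AutoTokenizer.from_pretrained("amiraMamdouh/finetuned_MLM__English_French_German")
model = AutoModelForSeq2SeqLM.from_pretrained("amiraMamdouh/finetuned_MLM__English_French_German")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```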
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
humbertofraga/x503_bert_model_10epochs | humbertofraga | "2024-06-27T20:20:20Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-01T11:39:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
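In the absence of official instructions, a minimal, untested sketch using the `token-classification` pipeline (the label set and target language of this head are unknown):

```python
from transformers import pipeline

# Assumption: the checkpoint includes a token-classification head with its label map.
ner = pipeline(
    "token-classification",
    model="humbertofraga/x503_bert_model_10epochs",
    aggregation_strategy="simple",
)
print(ner("Maria moved from Lisbon to Berlin in 2020."))
```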
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
silvacarl/bert-base-banking77-pt2 | silvacarl | "2023-09-21T00:25:53Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-21T00:19:29Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9278593521222447
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3038
- F1: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
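For reference, these hyperparameters map onto 🤗 `TrainingArguments` roughly as follows (a sketch, not the exact training script; `output_dir` is an assumption):

```python
from transformers import TrainingArguments

# Sketch of the configuration listed above.
args = TrainingArguments(
    output_dir="bert-base-banking77-pt2",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```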
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0233 | 1.0 | 626 | 0.7610 | 0.8462 |
| 0.365 | 2.0 | 1252 | 0.3689 | 0.9149 |
| 0.1685 | 3.0 | 1878 | 0.3038 | 0.9279 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
Niggendar/TorarazSinsation | Niggendar | "2025-01-14T20:29:52Z" | 59 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-01-14T20:19:31Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
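In the absence of official instructions, a minimal, untested sketch (the repo is tagged `StableDiffusionXLPipeline`, so a standard SDXL checkpoint layout is assumed):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumption: standard SDXL layout; fp16 weights may or may not be provided.
pipe = StableDiffusionXLPipeline.from_pretrained("Niggendar/TorarazSinsation", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("a lighthouse at sunset, detailed illustration").images[0]
image.save("sample.png")
```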
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/E-Model-V1-i1-GGUF | mradermacher | "2025-03-30T06:12:17Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"chemistry",
"tr",
"dataset:BrewInteractive/alpaca-tr",
"dataset:ituperceptron/turkish_medical_reasoning",
"base_model:ErenalpCet/E-Model-V1",
"base_model:quantized:ErenalpCet/E-Model-V1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-30T01:56:47Z" | ---
base_model: ErenalpCet/E-Model-V1
datasets:
- BrewInteractive/alpaca-tr
- ituperceptron/turkish_medical_reasoning
language:
- tr
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- chemistry
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ErenalpCet/E-Model-V1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/E-Model-V1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
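As one concrete option, a GGUF file from the table below can be loaded with `llama-cpp-python` (a sketch; it assumes the i1-Q4_K_M file has already been downloaded locally):

```python
from llama_cpp import Llama

# The filename matches the i1-Q4_K_M entry in the table below.
llm = Llama(model_path="E-Model-V1.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Soru: Su molekülü kaç atomdan oluşur?\nCevap:", max_tokens=64)
print(out["choices"][0]["text"])
```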
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q2_K.gguf) | i1-Q2_K | 2.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_1.gguf) | i1-Q4_1 | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q6_K.gguf) | i1-Q6_K | 6.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Sky-Maestro-7B-Sce-GGUF | mradermacher | "2025-02-21T09:32:06Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Sky-Maestro-7B-Sce",
"base_model:quantized:bunnycore/Sky-Maestro-7B-Sce",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-21T08:42:50Z" | ---
base_model: bunnycore/Sky-Maestro-7B-Sce
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Sky-Maestro-7B-Sce
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Sky-Maestro-7B-Sce-GGUF/resolve/main/Sky-Maestro-7B-Sce.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tensorblock/Sailor-0.5B-Chat-GGUF | tensorblock | "2024-12-11T13:05:35Z" | 64 | 0 | null | [
"gguf",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"TensorBlock",
"GGUF",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:CohereForAI/aya_dataset",
"dataset:CohereForAI/aya_collection",
"dataset:Open-Orca/OpenOrca",
"base_model:sail/Sailor-0.5B-Chat",
"base_model:quantized:sail/Sailor-0.5B-Chat",
"license:apache-2.0",
"region:us",
"conversational"
] | null | "2024-12-11T13:02:46Z" | ---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
- TensorBlock
- GGUF
widget:
- text: 如何制作烤鱼?
example_title: Chinese
- text: How to bake fish?
example_title: English
- text: Bagaimana cara memanggang ikan?
example_title: Malay
- text: วิธีย่างปลา?
example_title: Thai
- text: Bagaimana membuat bakaran ikan?
example_title: Indonesian
- text: Làm thế nào để nướng cá?
example_title: Vietnamese
license: apache-2.0
base_model: sail/Sailor-0.5B-Chat
inference: false
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## sail/Sailor-0.5B-Chat - GGUF
This repo contains GGUF format model files for [sail/Sailor-0.5B-Chat](https://huggingface.co/sail/Sailor-0.5B-Chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>answer
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Sailor-0.5B-Chat-Q2_K.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q2_K.gguf) | Q2_K | 0.298 GB | smallest, significant quality loss - not recommended for most purposes |
| [Sailor-0.5B-Chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q3_K_S.gguf) | Q3_K_S | 0.333 GB | very small, high quality loss |
| [Sailor-0.5B-Chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q3_K_M.gguf) | Q3_K_M | 0.350 GB | very small, high quality loss |
| [Sailor-0.5B-Chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q3_K_L.gguf) | Q3_K_L | 0.364 GB | small, substantial quality loss |
| [Sailor-0.5B-Chat-Q4_0.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q4_0.gguf) | Q4_0 | 0.395 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Sailor-0.5B-Chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q4_K_S.gguf) | Q4_K_S | 0.397 GB | small, greater quality loss |
| [Sailor-0.5B-Chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q4_K_M.gguf) | Q4_K_M | 0.407 GB | medium, balanced quality - recommended |
| [Sailor-0.5B-Chat-Q5_0.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q5_0.gguf) | Q5_0 | 0.453 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Sailor-0.5B-Chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q5_K_S.gguf) | Q5_K_S | 0.453 GB | large, low quality loss - recommended |
| [Sailor-0.5B-Chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q5_K_M.gguf) | Q5_K_M | 0.459 GB | large, very low quality loss - recommended |
| [Sailor-0.5B-Chat-Q6_K.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q6_K.gguf) | Q6_K | 0.515 GB | very large, extremely low quality loss |
| [Sailor-0.5B-Chat-Q8_0.gguf](https://huggingface.co/tensorblock/Sailor-0.5B-Chat-GGUF/blob/main/Sailor-0.5B-Chat-Q8_0.gguf) | Q8_0 | 0.665 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory
```shell
huggingface-cli download tensorblock/Sailor-0.5B-Chat-GGUF --include "Sailor-0.5B-Chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Sailor-0.5B-Chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
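Alternatively, a single file can be fetched from Python with the `huggingface_hub` API (pick any filename from the specification table above):

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file and returns its local path.
path = hf_hub_download(repo_id="tensorblock/Sailor-0.5B-Chat-GGUF", filename="Sailor-0.5B-Chat-Q4_K_M.gguf")
print(path)
```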
|
RylanSchaeffer/collapse_gemma-2-2b_hs2_accumulatesubsample_iter14_sftsd2 | RylanSchaeffer | "2024-10-01T14:21:04Z" | 6 | 0 | null | [
"safetensors",
"gemma2",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:gemma",
"region:us"
] | null | "2024-10-01T14:18:14Z" | ---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: collapse_gemma-2-2b_hs2_accumulatesubsample_iter14_sftsd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# collapse_gemma-2-2b_hs2_accumulatesubsample_iter14_sftsd2
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2046
- Num Input Tokens Seen: 4998392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.3909 | 0 |
| 1.4664 | 0.0531 | 5 | 1.2779 | 265768 |
| 1.0297 | 0.1062 | 10 | 1.2239 | 526776 |
| 0.9672 | 0.1594 | 15 | 1.2051 | 794288 |
| 0.9285 | 0.2125 | 20 | 1.2391 | 1063824 |
| 0.7632 | 0.2656 | 25 | 1.2306 | 1332408 |
| 0.7406 | 0.3187 | 30 | 1.2478 | 1595464 |
| 0.6883 | 0.3718 | 35 | 1.2507 | 1871024 |
| 0.5929 | 0.4250 | 40 | 1.2429 | 2133560 |
| 0.4589 | 0.4781 | 45 | 1.2391 | 2394480 |
| 0.6095 | 0.5312 | 50 | 1.2221 | 2663544 |
| 0.5181 | 0.5843 | 55 | 1.2246 | 2930064 |
| 0.4917 | 0.6375 | 60 | 1.2135 | 3199536 |
| 0.5105 | 0.6906 | 65 | 1.2249 | 3465264 |
| 0.4253 | 0.7437 | 70 | 1.2138 | 3727952 |
| 0.4506 | 0.7968 | 75 | 1.2148 | 3991304 |
| 0.4301 | 0.8499 | 80 | 1.2095 | 4255664 |
| 0.432 | 0.9031 | 85 | 1.2015 | 4523456 |
| 0.3698 | 0.9562 | 90 | 1.2208 | 4781552 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
AlignmentResearch/robust_llm_pythia-spam-160m-mz-ada-v3-s-2 | AlignmentResearch | "2024-03-20T10:23:08Z" | 110 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m-deduped",
"base_model:finetune:EleutherAI/pythia-160m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-20T10:22:39Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-160m-deduped
model-index:
- name: robust_llm_pythia-spam-160m-mz-ada-v3-s-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-spam-160m-mz-ada-v3-s-2
This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on an unknown dataset.
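A hypothetical usage sketch with the `transformers` pipeline; the label names are whatever the fine-tune used, so check the model's config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-spam-160m-mz-ada-v3-s-2",
)
# the spam-detection framing is inferred from the model name, not documented
print(classifier("Congratulations, you have won a free cruise! Reply now to claim."))
```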
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
giannisan/Jett-w26 | giannisan | "2024-06-03T16:27:19Z" | 10 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:InferenceIllusionist/Excalibur-7b-DPO",
"base_model:finetune:InferenceIllusionist/Excalibur-7b-DPO",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-31T16:26:16Z" | ---
base_model:
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- senseable/WestLake-7B-v2
- S-miguel/The-Trinity-Coder-7B
- yam-peleg/Experiment26-7B
- InferenceIllusionist/Excalibur-7b-DPO
- Kukedlc/Jupiter-k-7B-slerp
library_name: transformers
license: apache-2.0
language:
- en
---
# Jett-w26

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as a base.
### Models Merged
The following models were included in the merge:
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [S-miguel/The-Trinity-Coder-7B](https://huggingface.co/S-miguel/The-Trinity-Coder-7B)
* [InferenceIllusionist/Excalibur-7b-DPO](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO)
* [Kukedlc/Jupiter-k-7B-slerp](https://huggingface.co/Kukedlc/Jupiter-k-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: yam-peleg/Experiment26-7B
# No parameters necessary for base model
- model: Kukedlc/Jupiter-k-7B-slerp
parameters:
density: 0.58
weight: 0.25
- model: S-miguel/The-Trinity-Coder-7B
parameters:
density: 0.6
weight: 0.20
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
density: 0.6
weight: 0.20
- model: senseable/WestLake-7B-v2
parameters:
density: 0.56
weight: 0.20
- model: InferenceIllusionist/Excalibur-7b-DPO
parameters:
density: 0.58
weight: 0.15
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
``` |
dbands/llama-3-8b-instruct-code-instructions-blender-16bit | dbands | "2024-05-20T16:29:06Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit",
"base_model:finetune:dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-05T16:37:31Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** dbands/llama-3-8b-instruct_code_instructions_122k_alpaca_style_4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
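A minimal loading sketch, assuming the Unsloth package and a CUDA GPU; the sequence length and 4-bit flag below are illustrative choices, not values from the training run:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dbands/llama-3-8b-instruct-code-instructions-blender-16bit",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # optional, reduces VRAM
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```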
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/cookinai_-_DonutLM-v1-gguf | RichardErkhov | "2024-08-29T10:25:06Z" | 38 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-08-29T07:50:02Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DonutLM-v1 - GGUF
- Model creator: https://huggingface.co/cookinai/
- Original model: https://huggingface.co/cookinai/DonutLM-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DonutLM-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q2_K.gguf) | Q2_K | 2.53GB |
| [DonutLM-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [DonutLM-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [DonutLM-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [DonutLM-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [DonutLM-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q3_K.gguf) | Q3_K | 3.28GB |
| [DonutLM-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [DonutLM-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [DonutLM-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [DonutLM-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [DonutLM-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [DonutLM-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [DonutLM-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q4_K.gguf) | Q4_K | 4.07GB |
| [DonutLM-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [DonutLM-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [DonutLM-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [DonutLM-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [DonutLM-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q5_K.gguf) | Q5_K | 4.78GB |
| [DonutLM-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [DonutLM-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [DonutLM-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q6_K.gguf) | Q6_K | 5.53GB |
| [DonutLM-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/cookinai_-_DonutLM-v1-gguf/blob/main/DonutLM-v1.Q8_0.gguf) | Q8_0 | 7.17GB |
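A minimal sketch for fetching one quant and running it locally with llama.cpp (the binary name and flags vary across llama.cpp versions):
```shell
# download a mid-size quant into the current directory
huggingface-cli download RichardErkhov/cookinai_-_DonutLM-v1-gguf --include "DonutLM-v1.Q4_K_M.gguf" --local-dir .
# run it (llama-cli is the current llama.cpp CLI binary)
./llama-cli -m DonutLM-v1.Q4_K_M.gguf -p "Hello" -n 64
```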
Original model description:
---
license: apache-2.0
tags:
- merge
---
# Fixed
Heard a lot in the community about jondurbin/bagel-dpo-7b-v0.1 and it sounds interesting.
SLERP merge of AIDC-ai-business/Marcoroni-7B-v3 and jondurbin/bagel-dpo-7b-v0.1
.yaml file for mergekit
```yaml
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 32]
- model: jondurbin/bagel-dpo-7b-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
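Saving the YAML above as `slerp.yml`, the merge could in principle be reproduced with the mergekit CLI; this is a hypothetical invocation sketch (flags vary by mergekit version):
```shell
pip install mergekit
mergekit-yaml slerp.yml ./merged-model --cuda
```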
|
mradermacher/MarsEd_Fiance-GGUF | mradermacher | "2025-03-17T23:08:28Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:MarsEdward/MarsEd_Fiance",
"base_model:quantized:MarsEdward/MarsEd_Fiance",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-17T22:49:32Z" | ---
base_model: MarsEdward/MarsEd_Fiance
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MarsEdward/MarsEd_Fiance
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
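As a minimal sketch, the recommended Q4_K_M quant can be fetched with the Hugging Face CLI:
```shell
huggingface-cli download mradermacher/MarsEd_Fiance-GGUF --include "MarsEd_Fiance.Q4_K_M.gguf" --local-dir .
```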
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MarsEd_Fiance-GGUF/resolve/main/MarsEd_Fiance.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Rahul-G/MLMTokens_TAM-10epoch-MLMMul | Rahul-G | "2023-10-23T09:16:52Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-mlm-100-1280",
"base_model:finetune:FacebookAI/xlm-mlm-100-1280",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-23T09:13:28Z" | ---
license: cc-by-nc-4.0
base_model: xlm-mlm-100-1280
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MLMTokens_TAM-10epoch-MLMMul
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MLMTokens_TAM-10epoch-MLMMul
This model is a fine-tuned version of [xlm-mlm-100-1280](https://huggingface.co/xlm-mlm-100-1280) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9759
- Accuracy: 0.6949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 18 | 1.7429 | 0.7288 |
| No log | 2.0 | 36 | 1.2344 | 0.1017 |
| No log | 3.0 | 54 | 1.5760 | 0.0847 |
| No log | 4.0 | 72 | 1.7159 | 0.6441 |
| No log | 5.0 | 90 | 1.9759 | 0.6949 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 1.13.1
- Datasets 2.14.5
- Tokenizers 0.14.1
|
DevQuasar/huihui-ai.granite-vision-3.2-2b-abliterated-GGUF | DevQuasar | "2025-03-11T16:22:19Z" | 0 | 0 | null | [
"gguf",
"image-text-to-text",
"base_model:huihui-ai/granite-vision-3.2-2b-abliterated",
"base_model:quantized:huihui-ai/granite-vision-3.2-2b-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | "2025-03-11T15:03:33Z" | ---
base_model:
- huihui-ai/granite-vision-3.2-2b-abliterated
pipeline_tag: image-text-to-text
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [huihui-ai/granite-vision-3.2-2b-abliterated](https://huggingface.co/huihui-ai/granite-vision-3.2-2b-abliterated)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> |
BogdanTurbal/model_bert_base_cased_d_political_bias_hate_bias_ep_1_2_a_sqn_a_b_p_100_5_v_12 | BogdanTurbal | "2024-08-12T13:59:13Z" | 5 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:BogdanTurbal/model_bert_base_cased_d_political_bias_ep_1_sqn_a_p_100_v_12",
"base_model:finetune:BogdanTurbal/model_bert_base_cased_d_political_bias_ep_1_sqn_a_p_100_v_12",
"license:apache-2.0",
"region:us"
] | null | "2024-08-12T13:59:00Z" | ---
license: apache-2.0
base_model: BogdanTurbal/model_bert_base_cased_d_political_bias_ep_1_sqn_a_p_100_v_12
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_bert_base_cased_d_political_bias_hate_bias_ep_1_2_a_sqn_a_b_p_100_5_v_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_bert_base_cased_d_political_bias_hate_bias_ep_1_2_a_sqn_a_b_p_100_5_v_12
This model is a fine-tuned version of [BogdanTurbal/model_bert_base_cased_d_political_bias_ep_1_sqn_a_p_100_v_12](https://huggingface.co/BogdanTurbal/model_bert_base_cased_d_political_bias_ep_1_sqn_a_p_100_v_12) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8437
- Accuracy: 0.8117
- F1 Micro: 0.8117
- Auc: 0.8905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.6912 | 0.2632 | 10 | 0.5478 | 0.7434 | 0.7434 | 0.8215 |
| 0.5206 | 0.5263 | 20 | 0.6056 | 0.7212 | 0.7212 | 0.8657 |
| 0.5054 | 0.7895 | 30 | 0.4820 | 0.7829 | 0.7829 | 0.8788 |
| 0.5282 | 1.0526 | 40 | 0.7322 | 0.7393 | 0.7393 | 0.8694 |
| 0.2437 | 1.3158 | 50 | 0.5172 | 0.7919 | 0.7919 | 0.8783 |
| 0.1982 | 1.5789 | 60 | 0.5518 | 0.7788 | 0.7788 | 0.8862 |
| 0.152 | 1.8421 | 70 | 0.6158 | 0.8059 | 0.8059 | 0.8892 |
| 0.1548 | 2.1053 | 80 | 0.6161 | 0.8166 | 0.8166 | 0.8950 |
| 0.0364 | 2.3684 | 90 | 0.6929 | 0.8133 | 0.8133 | 0.8912 |
| 0.0412 | 2.6316 | 100 | 0.8156 | 0.8117 | 0.8117 | 0.8880 |
| 0.0175 | 2.8947 | 110 | 0.8437 | 0.8117 | 0.8117 | 0.8905 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
pushpraj21/pushpraj | pushpraj21 | "2025-01-19T13:35:51Z" | 386 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-19T12:53:47Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: pushpraj
---
# Pushpraj
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `pushpraj` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pushpraj21/pushpraj', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
anzeo/fine_tuned_rte_XLMroberta | anzeo | "2024-05-22T19:55:19Z" | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-22T19:51:15Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine_tuned_rte_XLMroberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_rte_XLMroberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4763
- Accuracy: 0.6207
- F1: 0.5951
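A hypothetical usage sketch: RTE scores a premise/hypothesis pair, and the example pair plus label mapping below are assumptions (check the model's config for id2label):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("anzeo/fine_tuned_rte_XLMroberta")
model = AutoModelForSequenceClassification.from_pretrained("anzeo/fine_tuned_rte_XLMroberta")

inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```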
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.7117 | 1.7241 | 50 | 0.7129 | 0.4138 | 0.2422 |
| 0.7033 | 3.4483 | 100 | 0.6997 | 0.4138 | 0.2422 |
| 0.6845 | 5.1724 | 150 | 0.6933 | 0.4828 | 0.4828 |
| 0.6378 | 6.8966 | 200 | 0.8005 | 0.4828 | 0.4668 |
| 0.4579 | 8.6207 | 250 | 0.9656 | 0.6207 | 0.5951 |
| 0.2521 | 10.3448 | 300 | 1.2302 | 0.6552 | 0.6018 |
| 0.1196 | 12.0690 | 350 | 1.4679 | 0.5862 | 0.5789 |
| 0.0653 | 13.7931 | 400 | 1.4763 | 0.6207 | 0.5951 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
logasja/instagram-ginza | logasja | "2025-02-20T15:47:27Z" | 8 | 0 | keras | [
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] | image-to-image | "2025-02-17T17:47:33Z" | ---
library_name: keras
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
tags:
- adversarial
- aesthetic
- quality
- filter
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
license: gpl-3.0
base_model:
- vnet
- logasja/ArcFace
pipeline_tag: image-to-image
datasets:
- logasja/FDF
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/a12aef0a8ae82a31a052485a383c5d95)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
64,
128,
256,
512,
512
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 64,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_ArcFace": {
"d": "cosine_similarity",
"f": "ArcFace",
"name": "FEAT_ArcFace",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.1
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.9
},
"mean_squared_error": {
"name": "mean_squared_error",
"reduction": "sum_over_batch_size",
"weight": 0.1
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
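A sketch of instantiating the generator from the architecture config above, assuming the keras3-unets fork keeps `keras_unet_collection`'s `vnet_2d` signature (both the import path and the input shape are assumptions):
```python
from keras_unet_collection import models

vnet = models.vnet_2d(
    (256, 256, 3),                      # input shape taken from the training config
    filter_num=[64, 128, 256, 512, 512],
    n_labels=3,
    res_num_ini=1,
    res_num_max=3,
    activation="ReLU",
    output_activation="tanh",           # value copied verbatim from the config
    batch_norm=False,
    pool=False,
    unpool=False,
)
```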
## Model Architecture Plot
 |
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0530-epoch10-v7 | magnifi | "2024-05-30T17:54:56Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-30T17:18:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
playboy40k/flux-AimeeGarciaLora | playboy40k | "2025-02-01T12:02:54Z" | 353 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-01T12:01:49Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/FLUX.1-dev.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Aimee Garcia Flux
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/playboy40k/flux-AimeeGarciaLora/tree/main) them in the Files & versions tab.
|
damgomz/ft_bs64_lr6_base_x2 | damgomz | "2024-05-17T11:52:18Z" | 111 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-16T15:14:14Z" | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T13:52:15'
project_name: ft_bs64_lr6_base_x2_emissions_tracker
run_id: 064ee03d-8888-4a8d-b7cc-2fabb82586f2
duration: 14284.394858121872
emissions: 0.0093445332329461
emissions_rate: 6.54177746118027e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 7.5
cpu_energy: 0.1686347934653361
gpu_energy: 0
ram_energy: 0.0297587275415659
energy_consumed: 0.1983935210069022
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 3
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 20
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 14284.394858121872 |
| Emissions (Co2eq in kg) | 0.0093445332329461 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 7.5 |
| CPU energy (kWh) | 0.1686347934653361 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0297587275415659 |
| Consumed energy (kWh) | 0.1983935210069022 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 3 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.027497460101884603 |
| Emissions (Co2eq in kg) | 0.005594721319431066 |
## Note
17 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_bs64_lr6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 64 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32580 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall
---|---|---|---|---
| 0 | 0.509996 | 0.432674 | 0.800442 | 0.815951 |
| 1 | 0.378199 | 0.369624 | 0.834315 | 0.825153 |
| 2 | 0.330051 | 0.377433 | 0.835788 | 0.924847 |
| 3 | 0.294745 | 0.346708 | 0.849779 | 0.878834 |
| 4 | 0.271853 | 0.396932 | 0.829897 | 0.757669 |
| 5 | 0.212074 | 0.384297 | 0.846097 | 0.848160 |
|
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-mlx-v3 | ModelCloud | "2025-01-19T06:30:49Z" | 17 | 1 | null | [
"safetensors",
"qwen2",
"4-bit",
"region:us"
] | null | "2025-01-18T19:03:22Z" | This model was quantized and exported to mlx using [GPTQModel](https://github.com/ModelCloud/GPTQModel).
## How to run this model
```shell
# install mlx
pip install mlx_lm
```
```python
from mlx_lm import load, generate
mlx_path = "ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-mlx-v3"
mlx_model, tokenizer = load(mlx_path)
prompt = "The capital of France is"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
```
### Export GPTQ to MLX
```shell
# install gptqmodel with mlx
pip install gptqmodel[mlx] --no-build-isolation
```
```python
from gptqmodel import GPTQModel
# load gptq quantized model
gptq_model_path = "ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3"
mlx_path = f"./vortex/QwQ-32B-Preview-gptqmodel-4bit-vortex-mlx-v3"
# export to mlx model
GPTQModel.export(gptq_model_path, mlx_path, "mlx")
``` |
utahnlp/yelp_polarity_t5-large_seed-3 | utahnlp | "2024-04-06T05:18:58Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-06T05:17:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
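In the absence of an official snippet, a hypothetical starter sketch; the prompt format and label verbalizer used during fine-tuning are not documented, so the raw-review input below is an assumption:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "utahnlp/yelp_polarity_t5-large_seed-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The food was cold and the service was slow.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```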
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
1024m/WASSA2024-3A-LLAMA3-7B-5Demo-lora | 1024m | "2024-05-28T23:58:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-28T21:18:29Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** 1024m
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RaoFoundation/774M-03_09_2024 | RaoFoundation | "2024-03-10T06:34:47Z" | 197 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-10T05:56:25Z" | ---
license: mit
datasets:
- tiiuae/falcon-refinedweb
---
This model was trained for Bittensor's pretraining subnet (https://github.com/RaoFoundation/pretraining) and, as of March 9th 2024, is the top model.
Bittensor subnet 9 rewards miners for producing pretrained Foundation-Models on the Falcon Refined Web dataset. It acts like a continuous benchmark whereby miners are rewarded for attaining the best losses on randomly sampled pages of Falcon given a consistent model architecture. The reward mechanism works as follows:
1. Miners train and periodically publish models to hugging face and commit the metadata for that model to the Bittensor chain.
2. Validators download the models from hugging face for each miner based on the Bittensor chain metadata and continuously evaluate them, setting weights based on the performance of each model against the Falcon dataset. They also log results to [wandb](https://wandb.ai/opentensor-dev/pretraining-subnet).
3. The Bittensor chain aggregates weights from all active validators using Yuma Consensus to determine the proportion of TAO emission rewarded to miners and validators.
Bittensor hosts multiple incentive mechanisms through which miners are evaluated by validators for performing actions well. Validators perform the process of evaluation and 'set weights', which are transactions on Bittensor's blockchain. Each incentive mechanism in Bittensor is called a 'subnet' and has an identifier (this particular mechanism has subnet uid 9). Weights and the amount of TAO held by the validators become inputs to Bittensor's consensus mechanism, called Yuma Consensus. YC drives validators towards a consensus, an agreement about the value of the work done by miners. The miners with the highest agreed-upon scores are minted TAO, the network's digital currency.
Miners within this subnet are evaluated based on the number of times the model they have hosted has a lower loss than another model on the network when randomly sampling from the near-infinite Falcon Refined Web pretraining dataset. To perform well, miners must attain the lowest loss on the largest number of random batches. Finding the best model and delta at the earliest block ensures the most incentive. |
naver-clova-ix/donut-base-finetuned-docvqa | naver-clova-ix | "2024-03-09T13:01:37Z" | 14,241 | 209 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"donut",
"image-to-text",
"vision",
"document-question-answering",
"arxiv:2111.15664",
"license:mit",
"endpoints_compatible",
"region:us"
] | document-question-answering | "2022-07-19T13:58:22Z" | ---
license: mit
pipeline_tag: document-question-answering
tags:
- donut
- image-to-text
- vision
widget:
- text: "What is the invoice number?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
---
# Donut (base-sized model, fine-tuned on DocVQA)
Donut model fine-tuned on DocVQA. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

## Intended uses & limitations
This model is fine-tuned on DocVQA, a document visual question answering dataset.
We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
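A condensed sketch of the documented usage pattern (`document.png` is a placeholder for your own image):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

image = Image.open("document.png").convert("RGB")
# Donut conditions generation on a task prompt containing the question
prompt = "<s_docvqa><s_question>What is the invoice number?</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.token2json(processor.batch_decode(outputs)[0]))
```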
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-15664,
author = {Geewook Kim and
Teakgyu Hong and
Moonbin Yim and
Jinyoung Park and
Jinyeong Yim and
Wonseok Hwang and
Sangdoo Yun and
Dongyoon Han and
Seunghyun Park},
title = {Donut: Document Understanding Transformer without {OCR}},
journal = {CoRR},
volume = {abs/2111.15664},
year = {2021},
url = {https://arxiv.org/abs/2111.15664},
eprinttype = {arXiv},
eprint = {2111.15664},
timestamp = {Thu, 02 Dec 2021 10:50:44 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
QT321/quynh_deberta-v3-Base-finetuned-AI_req_3 | QT321 | "2024-02-06T19:45:01Z" | 44 | 0 | transformers | [
"transformers",
"tf",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-06T19:44:35Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: quynh_deberta-v3-Base-finetuned-AI_req_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# quynh_deberta-v3-Base-finetuned-AI_req_3
This model is a fine-tuned version of [microsoft/deberta-v3-Base](https://huggingface.co/microsoft/deberta-v3-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0121
- Train Accuracy: 0.9986
- Validation Loss: 1.0930
- Validation Accuracy: 0.8190
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2730, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.8969 | 0.6099 | 0.7640 | 0.7048 | 0 |
| 0.7508 | 0.6951 | 0.7178 | 0.7048 | 1 |
| 0.6149 | 0.7404 | 0.5981 | 0.7714 | 2 |
| 0.5077 | 0.7720 | 0.5059 | 0.8095 | 3 |
| 0.4357 | 0.8036 | 0.4621 | 0.8095 | 4 |
| 0.3671 | 0.8407 | 0.4859 | 0.8190 | 5 |
| 0.2844 | 0.8777 | 0.6214 | 0.8000 | 6 |
| 0.2789 | 0.8860 | 0.5499 | 0.8190 | 7 |
| 0.1938 | 0.9107 | 0.8163 | 0.7810 | 8 |
| 0.1773 | 0.9231 | 0.8831 | 0.7905 | 9 |
| 0.1308 | 0.9547 | 0.6316 | 0.8095 | 10 |
| 0.0803 | 0.9712 | 0.8531 | 0.8286 | 11 |
| 0.0544 | 0.9849 | 0.7941 | 0.7810 | 12 |
| 0.0285 | 0.9931 | 0.9530 | 0.8190 | 13 |
| 0.0121 | 0.9986 | 1.0930 | 0.8190 | 14 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
bowilleatyou/35b7097b-209c-46af-8cfb-7c385082e6e7 | bowilleatyou | "2025-03-25T11:40:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-25T05:35:55Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AnySue/Learning | AnySue | "2023-07-17T03:50:48Z" | 0 | 0 | null | [
"dataset:fka/awesome-chatgpt-prompts",
"doi:10.57967/hf/0900",
"license:openrail",
"region:us"
] | null | "2022-11-06T15:36:44Z" | ---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
--- |
sail-rvc/Hoshimachi_Suisei__Hololive_JP__RVC_v1 | sail-rvc | "2023-07-14T07:24:02Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:23:25Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Hoshimachi_Suisei__Hololive_JP__RVC_v1
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:24:02
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
ViperEk/KHAOSZ | ViperEk | "2025-03-29T09:29:39Z" | 34 | 2 | null | [
"safetensors",
"zh",
"dataset:BelleGroup/train_3.5M_CN",
"dataset:YeungNLP/moss-003-sft-data",
"dataset:llm-wizard/alpaca-gpt4-data-zh",
"license:apache-2.0",
"region:us"
] | null | "2025-01-23T04:50:05Z" | ---
datasets:
- BelleGroup/train_3.5M_CN
- YeungNLP/moss-003-sft-data
- llm-wizard/alpaca-gpt4-data-zh
language:
- zh
license: apache-2.0
---
## KHAOSZ
This is a bilingual (Chinese/English) Transformer model, including the model configuration and training pipeline. Training runs off the parameters set in `params/config.json`, and `train.py` parses the command-line arguments, including the dataset root directory, number of training epochs, batch size, checkpoint-saving interval, and checkpoint directory.
The code lives at: https://github.com/ViperEkura/KHAOSZ/tree/main
- **📊 Device selection**: the current code trains on CUDA by default.
- **🌐 Performance optimization**: the code sets `dtype=torch.bfloat16` for training, which helps speed up training and reduce GPU memory usage, but requires hardware that supports this feature.
- **🤖 Language support**: the model currently supports training on Chinese and English datasets; no other languages were included when training the tokenizer. The BBPE tokenizer has no OOV problem, but support for other languages is weak.
## 📌 How to train
To train this Transformer model, follow these steps:
(1). Prepare the dataset:
Make sure your dataset sits under a designated root directory. It should contain the text files used for training, which may be Chinese, English, or a mix of both.
The data files should match the model's expected input format, ideally token ids already produced by the tokenizer. To save memory, the ids are stored as a torch.Tensor (with a Python list, memory usage while loading the training data roughly more than doubles, since Python seems to store numbers at 64-bit precision by default, while int32 is actually sufficient).
(2). Install dependencies:
Make sure all the necessary Python libraries are installed:
```bash
conda env create -f environment.yml --name env_name
```
(3). Run the training script:
Run the training script with the following command, adjusting the arguments as needed:
```bash
python train.py \
--train_type=train_type[seq, sft, dpo] \
--data_root_path=/path/to/dataset \
--n_epoch=5 \
--batch_size=8 \
--max_lr=2e-4 \
--n_iter_ckpt=10000 \
--ckpt_dir checkpoints
```
--train_type: type of training run; one of seq, sft, dpo.
--data_root_path: root directory of the dataset.
--n_epoch: total number of training epochs.
--batch_size: number of samples per batch.
--n_iter_step: number of batches per optimizer step (gradient accumulation).
--max_lr: peak learning rate (the schedule is warmup followed by cosine decay).
--n_iter_ckpt: save a checkpoint every this many iterations.
--ckpt_dir: directory where checkpoints are saved.
--resume_train: whether to resume training from a checkpoint.
--resume_dir: checkpoint path to resume training from.
During training you can follow the training log (train_log.txt) in the terminal to track progress, loss values, and so on.
Checkpoint files are saved in the specified checkpoint directory; you can use them to resume training or to run evaluation.
## 👉 How to use
If you want to chat with this model, open the chat.py file and run it.
Alternatively, you can use the streaming interface / response-generation interface to hold a conversation:
```python
import torch
from module import Khaosz

model = Khaosz("params")
model = model.to(device='cuda', dtype=torch.bfloat16)
history = []

while True:
    query = input(">> ")
    if query == "!exit":
        break

    response_size = 0
    for response, history in model.stream_generate(
        query=query,
        history=history,
        temperature=0.85,
        top_p=0.95,
        top_k=50
    ):
        print(response[response_size:], end="")
        response_size = len(response)
```
Or you can complete the conversation with non-streaming output:
```python
import torch
from module import Khaosz

model = Khaosz("params")
model = model.to(device='cuda', dtype=torch.bfloat16)
history = []

while True:
    query = input(">> ")
    if query == "!exit":
        break

    response = model.generate(
        query=query,
        history=history,
        temperature=0.85,
        top_p=0.95,
        top_k=50
    )
    print(response)
```
## 📌 Other notes
The model is a 20-layer Transformer; its parameters are roughly as set in `config.json`, for a total size of about 400 million (0.40B) parameters.
The model uses weight tying: the embedding layer and the final linear layer share their weights (smaller models commonly do this to save parameters; without weight tying, an embedding layer with, say, a 100,000-word vocabulary would take 100,000 * 1024 = 102,400,000 parameters, i.e. about 0.1B. Since the vocabulary would eat up too many parameters, weight tying is the standard approach for small models).
Because the model has relatively few parameters, it may underperform on some tasks; for example, its grasp of complex linguistic phenomena may fall short of larger models. A smaller model is also more prone to overfitting the training data, which hurts generalization. On the other hand, this means the model can run on lower-spec hardware and trains relatively quickly.
The model has now completed the pre-train + SFT + DPO pipeline, and the corresponding training code is included in the project. |