| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-26 18:27:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 499 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-26 18:27:32) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
deswaq/iuh4 | deswaq | 2025-05-02T18:02:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:58:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
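The card leaves this section unfilled. Purely as a sketch, and only assuming the repository holds a standard Transformers causal-LM checkpoint (the repo is tagged `llama` and `text-generation`), loading it could look something like this:
```python
# Illustrative sketch only; the card itself documents no official usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deswaq/iuh4"  # repo id from this card; everything else here is an assumption

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```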
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/huihui-ai_Qwen3-4B-abliterated-Q5_K_M-GGUF | Triangle104 | 2025-05-02T18:01:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-4B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T18:01:28Z | ---
base_model: huihui-ai/Qwen3-4B-abliterated
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-4B-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-4B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_M-GGUF --hf-file qwen3-4b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_M-GGUF --hf-file qwen3-4b-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_M-GGUF --hf-file qwen3-4b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_M-GGUF --hf-file qwen3-4b-abliterated-q5_k_m.gguf -c 2048
```
|
mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF | mradermacher | 2025-05-02T18:00:31Z | 218 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T17:11:52Z | ---
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
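As an illustrative sketch (not part of the original card), a single-file quant from the table below can also be fetched and run with the `llama-cpp-python` bindings; the package choice and parameters here are assumptions, and only the repo id and filename come from this page:
```python
# Sketch only: assumes `pip install llama-cpp-python` (with huggingface_hub available).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF",
    filename="Josiefied-Qwen3-14B-abliterated-v1.Q4_K_M.gguf",  # one of the quants listed below
    n_ctx=2048,
)
print(llm("The meaning of life is", max_tokens=64)["choices"][0]["text"])
```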
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-14B-abliterated-v1-GGUF/resolve/main/Josiefied-Qwen3-14B-abliterated-v1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Triangle104/huihui-ai_Qwen3-4B-abliterated-Q5_K_S-GGUF | Triangle104 | 2025-05-02T18:00:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-4B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:59:45Z | ---
base_model: huihui-ai/Qwen3-4B-abliterated
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-4B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q5_K_S-GGUF --hf-file qwen3-4b-abliterated-q5_k_s.gguf -c 2048
```
|
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_6_0_1_MC | gradientrouting-spar | 2025-05-02T17:58:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T11:18:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joboffer/ed089169-6b1d-43cb-8026-119df2de3a23 | joboffer | 2025-05-02T17:55:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T17:44:49Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed089169-6b1d-43cb-8026-119df2de3a23
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 83b1700bf8a9ee56_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/83b1700bf8a9ee56_train_data.json
  type:
    field_instruction: abstract
    field_output: title
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/ed089169-6b1d-43cb-8026-119df2de3a23
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/83b1700bf8a9ee56_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1915c1a2-c038-45d6-98b0-3f4a5eeb7f31
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 1915c1a2-c038-45d6-98b0-3f4a5eeb7f31
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ed089169-6b1d-43cb-8026-119df2de3a23
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7995
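The card gives no loading example; the following is a rough sketch under the assumption that the repo contains a standard PEFT LoRA adapter for the base model named in the config above (the prompt mirrors the abstract-to-title task in that config):
```python
# Illustrative sketch only; not taken from the original card.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "joboffer/ed089169-6b1d-43cb-8026-119df2de3a23"

# Loads openlm-research/open_llama_3b and applies the LoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")

prompt = "We present a method for ..."  # placeholder abstract, per the training config's abstract-to-title setup
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```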
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6383 | 0.0048 | 200 | 1.7995 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
seungbo7747/summarization_model | seungbo7747 | 2025-05-02T17:53:42Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-base",
"base_model:finetune:paust/pko-t5-base",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-29T02:30:23Z | ---
library_name: transformers
license: cc-by-4.0
base_model: paust/pko-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization_model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_model
This model is a fine-tuned version of [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6012
- Rouge1: 0.0661
- Rouge2: 0.0169
- Rougel: 0.0660
- Rougelsum: 0.0660
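The card does not include a usage snippet; a minimal sketch, assuming the standard Transformers summarization pipeline applies to this checkpoint (it is tagged `text2text-generation` and built on a Korean T5 base):
```python
# Sketch only; not part of the original card.
from transformers import pipeline

summarizer = pipeline("summarization", model="seungbo7747/summarization_model")

text = "Paste a long passage to condense here ..."  # placeholder; the expected input format is not documented
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```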
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
niklasm222/qwen2.5-3b-inst-grpo-1.75k-gsm8k-unsloth-willccbb | niklasm222 | 2025-05-02T17:50:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:48:18Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
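The card ends with the badge above; purely as a sketch, assuming the uploaded weights load as an ordinary Transformers causal-LM (the repo is tagged `qwen2` and `text-generation`, and its name suggests GSM8K-style math prompts):
```python
# Sketch only; usage is not documented in the original card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="niklasm222/qwen2.5-3b-inst-grpo-1.75k-gsm8k-unsloth-willccbb",
)
prompt = "A train travels 60 km in 1.5 hours. What is its average speed?"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```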
|
Selssabil/News-Recommender-MIND-LAST-VR-3-4-2025-10Ep | Selssabil | 2025-05-02T17:48:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T17:48:00Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Selssabil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
1-Jobz-Hunting-Sajal-Malik-Viral-Videos/wATCH.TRENDING.VIDEO.Jobz.Hunting.Sajal.Malik.viral.video.Tutorial | 1-Jobz-Hunting-Sajal-Malik-Viral-Videos | 2025-05-02T17:43:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T17:42:41Z | 18 seconds ago
<a href="https://tv2online.com/Video/?v=Sophie+Rain+Spiderman" rel="nofollow">βΊβΊβ
πΎπππΎπ ππππ ==βΊβΊ ππͺπ‘π‘ πππππ€οΈβ</a></p>
<a href="https://tv2online.com/Video/?v=Sophie+Rain+Spiderman" rel="nofollow">π΄βΊπππππ ππππ π==βΊβΊ ππ¨π°π§π₯π¨ππ ππ¨π°β¬οΈβ¬οΈβ</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Video/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Actor jobz hunting sajal malik Original Vπdeo Vπdeo took the internet by storm and amazed viewers on various social media platforms. Actor jobz hunting sajal malik, a young and talented digital creator, recently became famous thanks to this interesting Vπdeo.
Lπaked Vπdeo Actor jobz hunting sajal malik Vπral Vπdeo Original Vπdeo Lπnk On Social Media Telegram X Trending Tiktok |
SemanticAlignment/Mistral-v0.1-Italian-FVT | SemanticAlignment | 2025-05-02T17:41:55Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"it",
"en",
"arxiv:2504.17025",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T11:45:53Z | ---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model:
- mistralai/Mistral-7B-v0.1
---
# Mistral-7B-v0.1-Italian-FVT
<div align="center">
<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />
</div>
The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a set of 7B adapted generative models (text in/text out) derived from **Mistral-7B-v0.1**.
*Mistral-v0.1-Italian-FVT* is a Mistral model that was continually trained after tokenizer substitution.
After adaptation, the tokenizer of this model is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR
**Model Architecture:** Mistral-7B-v0.1-Adapted is an auto-regressive language model that uses an optimized transformer architecture.
## Data used for the adaptation
The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data were extracted to be skewed toward the Italian language with a ratio of one to four, taking the first 9B tokens from the Italian part of CulturaX and the first 3B tokens from the English part.
## Use with Transformers
You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "SemanticAlignment/Mistral-v0.1-Italian-FVT"
pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Cosa si può fare in una bella giornata di sole?")
```
Code: https://github.com/SapienzaNLP/sava
## Citation
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation},
author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
year={2025},
eprint={2504.17025},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.17025},
}
``` |
18-New-Viral-Jobz-Hunting-Sajal-Malik/TRENDING.VIDEO.Jobz.Hunting.Sajal.Malik.viral.video.original.Link | 18-New-Viral-Jobz-Hunting-Sajal-Malik | 2025-05-02T17:41:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T17:40:18Z | 18 seconds ago
<a href="https://tv2online.com/Video/?v=Sophie+Rain+Spiderman" rel="nofollow">βΊβΊβ
πΎπππΎπ ππππ ==βΊβΊ ππͺπ‘π‘ πππππ€οΈβ</a></p>
<a href="https://tv2online.com/Video/?v=Sophie+Rain+Spiderman" rel="nofollow">π΄βΊπππππ ππππ π==βΊβΊ ππ¨π°π§π₯π¨ππ ππ¨π°β¬οΈβ¬οΈβ</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Video/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Actor jobz hunting sajal malik Original Vπdeo Vπdeo took the internet by storm and amazed viewers on various social media platforms. Actor jobz hunting sajal malik, a young and talented digital creator, recently became famous thanks to this interesting Vπdeo.
Lπaked Vπdeo Actor jobz hunting sajal malik Vπral Vπdeo Original Vπdeo Lπnk On Social Media Telegram X Trending Tiktok |
MergeBench-Llama-8B-it/llama3-8b-it-GRPO-after-sft | MergeBench-Llama-8B-it | 2025-05-02T17:41:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:38:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hassan045/Luna | Hassan045 | 2025-05-02T17:39:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T17:38:01Z | ---
license: apache-2.0
---
|
skumar1998/result | skumar1998 | 2025-05-02T17:39:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-12b-it",
"base_model:finetune:google/gemma-3-12b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T08:17:02Z | ---
base_model: google/gemma-3-12b-it
library_name: transformers
model_name: result
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for result
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="skumar1998/result", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.51.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SemanticAlignment/Llama-3.1-8B-Italian-SAVA | SemanticAlignment | 2025-05-02T17:38:31Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"it",
"en",
"arxiv:2504.17025",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T12:39:20Z | ---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model:
- meta-llama/Llama-3.1-8B
---
# Llama-3.1-8B-Italian-SAVA
<div align="center">
<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />
</div>
The **Llama-3.1-8B-Adapted** collection of large language models (LLMs) is a set of 8B adapted generative models (text in/text out) derived from **Llama-3.1-8B**.
*Llama-3.1-8B-Italian-SAVA* is a Llama model that was continually trained after tokenizer substitution.
After adaptation, the tokenizer of this model is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR
**Model Architecture:** Llama-3.1-8B-Adapted is an auto-regressive language model that uses an optimized transformer architecture.
## Data used for the adaptation
The **Llama-3.1-8B-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data were extracted to be skewed toward the Italian language with a ratio of one to four, taking the first 9B tokens from the Italian part of CulturaX and the first 3B tokens from the English part.
## Use with Transformers
You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "SemanticAlignment/Llama-3.1-8B-Italian-SAVA"
pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Cosa si può fare in una bella giornata di sole?")
```
Code: https://github.com/SapienzaNLP/sava
## Citation
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation},
author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
year={2025},
eprint={2504.17025},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.17025},
}
``` |
cybershiptrooper/grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10 | cybershiptrooper | 2025-05-02T17:37:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:saraprice/llama2-7B-chat-helpful-only",
"base_model:finetune:saraprice/llama2-7B-chat-helpful-only",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:28:29Z | ---
base_model: saraprice/llama2-7B-chat-helpful-only
library_name: transformers
model_name: grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10
This model is a fine-tuned version of [saraprice/llama2-7B-chat-helpful-only](https://huggingface.co/saraprice/llama2-7B-chat-helpful-only).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cybershiptrooper/grpo_linear_mean_10p_fpr_7B-threshold_0.6587-RM-n_examples_200-probe_linear_layers_10", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cybershiptrooper/huggingface/runs/7bijtz0e)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.51.3
- Pytorch: 2.2.2+cu121
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Savoxism/t5_malls_iter0 | Savoxism | 2025-05-02T17:36:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-02T17:35:26Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: t5_malls_iter0
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_malls_iter0
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0746
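No usage example is given in the card; below is a minimal sketch assuming this is a standard T5 text2text checkpoint (its base model is `t5-base`), with a placeholder input since the expected prompt format is not documented:
```python
# Sketch only; not from the original card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Savoxism/t5_malls_iter0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("example input text", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```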
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2309 | 0.9379 | 200 | 0.1423 |
| 0.1484 | 1.8722 | 400 | 0.0993 |
| 0.1203 | 2.8066 | 600 | 0.0846 |
| 0.1106 | 3.7409 | 800 | 0.0777 |
| 0.1055 | 4.6753 | 1000 | 0.0746 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
SemanticAlignment/Llama-3-1-8B-Italian-FVT | SemanticAlignment | 2025-05-02T17:35:40Z | 3 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"it",
"en",
"arxiv:2504.17025",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T13:08:28Z | ---
language:
- it
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B
---
# Llama-3.1-8B-Italian-FVT
<div align="center">
<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />
</div>
The **Llama-3.1-8B-Adapted** collection of large language models (LLMs) is a set of 8B adapted generative models (text in/text out) derived from **Llama-3.1-8B**.
*Llama-3.1-8B-Italian-FVT* is a Llama model that was continually trained after tokenizer substitution.
After adaptation, the tokenizer of this model is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR
**Model Architecture:** Llama-3.1-8B-Adapted is an auto-regressive language model that uses an optimized transformer architecture.
## Data used for the adaptation
The **Llama-3.1-8B-Adapted** model was trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data were extracted to be skewed toward the Italian language with a ratio of one to four, taking the first 9B tokens from the Italian part of CulturaX and the first 3B tokens from the English part.
## Use with Transformers
You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "SemanticAlignment/Llama-3.1-8B-Italian-FVT"
pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Cosa si può fare in una bella giornata di sole?")
```
Code: https://github.com/SapienzaNLP/sava
## Citation
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation},
author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
year={2025},
eprint={2504.17025},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.17025},
}
``` |
Andrewwango/ssibench | Andrewwango | 2025-05-02T17:35:24Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-05-02T17:30:15Z | ---
license: bsd-3-clause
---
|
Lucy-in-the-Sky/Qwen3-16B-A3B-Q4_K_M-GGUF | Lucy-in-the-Sky | 2025-05-02T17:32:23Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:kalomaze/Qwen3-16B-A3B",
"base_model:quantized:kalomaze/Qwen3-16B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:31:38Z | ---
base_model: kalomaze/Qwen3-16B-A3B
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Lucy-in-the-Sky/Qwen3-16B-A3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`kalomaze/Qwen3-16B-A3B`](https://huggingface.co/kalomaze/Qwen3-16B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kalomaze/Qwen3-16B-A3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q4_K_M-GGUF --hf-file qwen3-16b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q4_K_M-GGUF --hf-file qwen3-16b-a3b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q4_K_M-GGUF --hf-file qwen3-16b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q4_K_M-GGUF --hf-file qwen3-16b-a3b-q4_k_m.gguf -c 2048
```
|
RLHF-And-Friends/TLDR-Llama-3.2-3B-SmallSFT | RLHF-And-Friends | 2025-05-02T17:31:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"dataset:tldr-sft",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T13:47:05Z | ---
base_model: meta-llama/Llama-3.2-3B
datasets: tldr-sft
library_name: transformers
model_name: SFT-TLDR-Llama-3.2-3B-SMALL
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SFT-TLDR-Llama-3.2-3B-SMALL
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [tldr-sft](https://huggingface.co/datasets/tldr-sft) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RLHF-And-Friends/SFT-TLDR-Llama-3.2-3B-SMALL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/RADFAN/SFT-TLDR/runs/e58csjw9)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sattish/mohan | sattish | 2025-05-02T17:29:34Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T17:01:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mohan
---
# Mohan
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mohan` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "mohan",
    "lora_weights": "https://huggingface.co/sattish/mohan/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sattish/mohan', weight_name='lora.safetensors')
image = pipeline('mohan').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
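As a hedged illustration of the weighting and fusing mentioned above (continuing from the snippet above; method names are from recent diffusers releases and may differ in older versions):
```py
# Fuse the LoRA into the base weights at a chosen strength, generate, then undo.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('mohan').images[0]
pipeline.unfuse_lora()  # restore the un-fused base weights if needed
```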
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/sattish/mohan/discussions) to add images that show off what you've made with this LoRA.
|
gulzi/Kaz_Roberta_fine_tuned | gulzi | 2025-05-02T17:26:50Z | 0 | 0 | null | [
"roberta",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T17:20:56Z | ---
license: apache-2.0
---
|
deswaq/iuh3 | deswaq | 2025-05-02T17:22:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:19:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/MiMo-7B-SFT-4bit | mlx-community | 2025-05-02T17:13:51Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mimo",
"text-generation",
"conversational",
"custom_code",
"base_model:XiaomiMiMo/MiMo-7B-SFT",
"base_model:quantized:XiaomiMiMo/MiMo-7B-SFT",
"license:mit",
"4-bit",
"region:us"
] | text-generation | 2025-05-02T17:02:43Z | ---
license: mit
base_model: XiaomiMiMo/MiMo-7B-SFT
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/MiMo-7B-SFT-4bit
This model [mlx-community/MiMo-7B-SFT-4bit](https://huggingface.co/mlx-community/MiMo-7B-SFT-4bit) was
converted to MLX format from [XiaomiMiMo/MiMo-7B-SFT](https://huggingface.co/XiaomiMiMo/MiMo-7B-SFT)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/MiMo-7B-SFT-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/Sailor-1.8b-chat-sft-v1-GGUF | mradermacher | 2025-05-02T17:12:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:DuongTrongChi/Sailor-1.8b-chat-sft-v1",
"base_model:quantized:DuongTrongChi/Sailor-1.8b-chat-sft-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T16:57:51Z | ---
base_model: DuongTrongChi/Sailor-1.8b-chat-sft-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DuongTrongChi/Sailor-1.8b-chat-sft-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
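For example (mirroring the llama.cpp invocation used on other GGUF repos in this collection, and assuming a llama.cpp build with `--hf-repo` download support), the Q4_K_M quant listed below can be run directly:
```bash
llama-cli --hf-repo mradermacher/Sailor-1.8b-chat-sft-v1-GGUF \
  --hf-file Sailor-1.8b-chat-sft-v1.Q4_K_M.gguf \
  -p "Hello"
```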
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q3_K_S.gguf) | Q3_K_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q3_K_M.gguf) | Q3_K_M | 1.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q3_K_L.gguf) | Q3_K_L | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.IQ4_XS.gguf) | IQ4_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q4_K_S.gguf) | Q4_K_S | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q4_K_M.gguf) | Q4_K_M | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q5_K_M.gguf) | Q5_K_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q6_K.gguf) | Q6_K | 1.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.Q8_0.gguf) | Q8_0 | 2.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor-1.8b-chat-sft-v1-GGUF/resolve/main/Sailor-1.8b-chat-sft-v1.f16.gguf) | f16 | 3.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
diegobit/llama-3-8b-Instruct-bnb-4bit-ita-orpo | diegobit | 2025-05-02T17:11:34Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"dataset:mii-community/ultrafeedback-preferences-translated-ita",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-03T07:49:21Z | ---
library_name: transformers
tags:
- unsloth
license: llama3
datasets:
- mii-community/ultrafeedback-preferences-translated-ita
---
# Model Card for Model ID
This is an ORPO fine-tuning of llama-3-8b for Italian, trained on the Italian ultrafeedback dataset: [mii-community/ultrafeedback-preferences-translated-ita](https://huggingface.co/datasets/mii-community/ultrafeedback-preferences-translated-ita)
## Model Details
### Model Description
- **Developed by:** Diego Giorgini
- **Funded by:** AI Technologies SRL - www.aitechnologies.it
- **Language(s) (NLP):** Italian
- **License:** llama3
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
## Training Details
### Environment
unsloth: 2024.5
torch: 2.2
### Training Data
`mii-community/ultrafeedback-preferences-translated-ita` is a selection of 55k rows of the ultrafeedback dataset, translated into Italian with Argos Translate.
### Training Procedure
#### Preprocessing [optional]
- No preprocessing has been performed, except for formatting with the llama3 chat_template from unsloth:
```tokenizer = get_chat_template(tokenizer, chat_template = "llama-3")```
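Spelled out with its import, that call looks roughly like this (a sketch; in the actual run the tokenizer comes from the unsloth model load rather than `AutoTokenizer`):
```python
from transformers import AutoTokenizer
from unsloth.chat_templates import get_chat_template

tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct-bnb-4bit")
# Attach the llama-3 chat template so preference pairs are rendered with the
# same special tokens the instruct model expects.
tokenizer = get_chat_template(tokenizer, chat_template="llama-3")
```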
#### Training Hyperparameters
- **Training regime:** 4bit
- **PEFT parameters:**
- **Model loading parameters:**
```
max_seq_length = 8192
dtype = None
load_in_4bit = True
```
```
r = 64
lora_alpha = 64
lora_dropout = 0
bias = "none"
random_state = 3407
use_rslora = False
loftq_config = None
```
- **ORPOConfig parameters:**
```
max_length = 8192
max_prompt_length = max_seq_length//2
max_completion_length = max_seq_length//2
warmup_ratio = 0.1
weight_decay = 0.01
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
learning_rate=8e-6
beta = 0.1
optim = "paged_adamw_8bit"
lr_scheduler_type = "linear"
num_train_epochs = 1
```
#### Speeds, Sizes, Times
16h on an A100-40GB
## Model Card Contact
[email protected] |
diegobit/Phi-3-mini-4k-instruct-ita-orpo-v2 | diegobit | 2025-05-02T17:11:07Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"conversational",
"dataset:efederici/alpaca-vs-alpaca-orpo-dpo",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-07T08:00:23Z | ---
library_name: transformers
tags:
- unsloth
license: mit
datasets:
- efederici/alpaca-vs-alpaca-orpo-dpo
---
# Model Card for Model ID
This is an ORPO fine-tuning of phi-3-mini-4k-instruct for Italian, trained on the Italian Alpaca vs. Alpaca dataset: [efederici/alpaca-vs-alpaca-orpo-dpo](https://huggingface.co/datasets/efederici/alpaca-vs-alpaca-orpo-dpo)
## Model Details
### Model Description
- **Developed by:** Diego Giorgini
- **Funded by:** AI Technologies SRL - www.aitechnologies.it
- **Language(s) (NLP):** Italian
- **License:** mit
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct
## Training Details
### Environment
unsloth: 2024.5
torch: 2.2
### Training Data
[efederici/alpaca-vs-alpaca-orpo-dpo](https://huggingface.co/datasets/efederici/alpaca-vs-alpaca-orpo-dpo): The Alpaca vs. Alpaca dataset is a curated blend of the Alpaca dataset and the Alpaca GPT-4 dataset, both available on HuggingFace Datasets. It uses the responses from the standard Alpaca dataset as the 'rejected' answers, steering the model towards the GPT-4 responses, which are treated as the 'chosen' ones.
### Training Procedure
#### Preprocessing [optional]
- No preprocessing has been performed, except for formatting with the phi-3 chat_template from unsloth:
```tokenizer = get_chat_template(tokenizer, chat_template = "phi-3")```
#### Training Hyperparameters
- **Training regime:** bf16
- **Model loading parameters:**
```
max_seq_length = 8192
dtype = None
load_in_4bit = False
```
- **PEFT parameters:**
```
r = 64
lora_alpha = 64
lora_dropout = 0
bias = "none"
random_state = 3407
use_rslora = False
loftq_config = None
```
- **ORPOConfig parameters:**
```
max_length = 8192
max_prompt_length = max_seq_length//2
max_completion_length = max_seq_length//2
warmup_ratio = 0.1
weight_decay = 0.01
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
learning_rate=8e-6
beta = 0.1
optim = "paged_adamw_8bit"
lr_scheduler_type = "linear"
num_train_epochs = 1
```
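Put together, the values above correspond roughly to the following unsloth + TRL sketch (the actual training script is not published; preference-column handling and minor argument names may differ across library versions):
```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer

max_seq_length = 8192

# bf16 base model, no 4-bit quantisation (see "Model loading parameters").
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-3-mini-4k-instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=False,
)
tokenizer = get_chat_template(tokenizer, chat_template="phi-3")

# LoRA adapter using the PEFT parameters above; target_modules are not listed
# on the card, so the usual attention/MLP projections are assumed here.
model = FastLanguageModel.get_peft_model(
    model,
    r=64, lora_alpha=64, lora_dropout=0, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=3407, use_rslora=False, loftq_config=None,
)

# ORPOTrainer expects prompt/chosen/rejected columns; any renaming of this
# dataset's columns is omitted from the sketch.
dataset = load_dataset("efederici/alpaca-vs-alpaca-orpo-dpo", split="train")

args = ORPOConfig(
    output_dir="outputs",
    bf16=True,
    max_length=max_seq_length,
    max_prompt_length=max_seq_length // 2,
    max_completion_length=max_seq_length // 2,
    beta=0.1,
    learning_rate=8e-6,
    warmup_ratio=0.1,
    weight_decay=0.01,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    optim="paged_adamw_8bit",
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer).train()
```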
#### Speeds, Sizes, Times
7h on an A100-40GB
## Model Card Contact
[email protected] |
phospho-app/Starkosaure-Stuffed_Animal_3cam_V0.0-x40pmnwemi | phospho-app | 2025-05-02T17:10:29Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-02T17:08:31Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 224, in predict
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 277, in apply_rotary_pos_emb
q_embed = (q * cos) + (rotate_half(q) * sin)
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 252, in rotate_half
return torch.cat((-x2, x1), dim=-1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 38.75 MiB is free. Process 32 has 79.21 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 336.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/450 [00:09<?, ?it/s]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 226, in predict
raise RuntimeError(e)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 277, in apply_rotary_pos_emb
q_embed = (q * cos) + (rotate_half(q) * sin)
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 252, in rotate_half
return torch.cat((-x2, x1), dim=-1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 38.75 MiB is free. Process 32 has 79.21 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 336.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/450 [00:09<?, ?it/s]
```
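The traceback itself points at two easy mitigations before re-launching the run: enable expandable segments and/or lower the batch size (a generic sketch, not phospho-specific configuration):
```bash
# Suggested verbatim by the PyTorch error message above.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
# Halving the batch size (64 -> 32) also lowers peak GPU memory.
```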
## Training parameters:
- **Dataset**: [Starkosaure/Stuffed_Animal_3cam_V0.0](https://huggingface.co/datasets/Starkosaure/Stuffed_Animal_3cam_V0.0)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: 443
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
Disya/shuttle-3-mini-Q4_K_M-GGUF | Disya | 2025-05-02T17:10:10Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:shuttleai/shuttle-3-mini",
"base_model:quantized:shuttleai/shuttle-3-mini",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:09:32Z | ---
base_model: shuttleai/shuttle-3-mini
tags:
- llama-cpp
- gguf-my-repo
---
# Disya/shuttle-3-mini-Q4_K_M-GGUF
This model was converted to GGUF format from [`shuttleai/shuttle-3-mini`](https://huggingface.co/shuttleai/shuttle-3-mini) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/shuttleai/shuttle-3-mini) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Disya/shuttle-3-mini-Q4_K_M-GGUF --hf-file shuttle-3-mini-q4_k_m.gguf -c 2048
```
|
zera09/qwen2.5-3b-fin-chat | zera09 | 2025-05-02T17:06:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T16:53:10Z | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2.5-3b-fin-chat
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-3b-fin-chat
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zera09/qwen2.5-3b-fin-chat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zeramarveenlyngkhoi/huggingface/runs/ariddybx)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
treasure4l/Gemma2-Instruct-DPO | treasure4l | 2025-05-02T16:56:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/gemma-2-9b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-it-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T16:55:50Z | ---
base_model: unsloth/gemma-2-9b-it-bnb-4bit
library_name: transformers
model_name: Gemma2-Instruct-DPO
tags:
- generated_from_trainer
- unsloth
- trl
- dpo
licence: license
---
# Model Card for Gemma2-Instruct-DPO
This model is a fine-tuned version of [unsloth/gemma-2-9b-it-bnb-4bit](https://huggingface.co/unsloth/gemma-2-9b-it-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="treasure4l/Gemma2-Instruct-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Thought-Aligner-7B-v1.0-GGUF | mradermacher | 2025-05-02T16:53:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"safety",
"ai-safety",
"aligner",
"en",
"base_model:fgdrg/Thought-Aligner-7B-v1.0",
"base_model:quantized:fgdrg/Thought-Aligner-7B-v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T16:10:44Z | ---
base_model: fgdrg/Thought-Aligner-7B-v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- safety
- ai-safety
- aligner
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/fgdrg/Thought-Aligner-7B-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
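Alternatively, a minimal sketch with the `llama-cpp-python` bindings (call names assumed from that package's current API; pick any file from the table below):
```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Thought-Aligner-7B-v1.0-GGUF",
    filename="Thought-Aligner-7B-v1.0.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Rewrite this plan so it is safe:\n", max_tokens=128)
print(out["choices"][0]["text"])
```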
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Thought-Aligner-7B-v1.0-GGUF/resolve/main/Thought-Aligner-7B-v1.0.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
deepghs/waifu2x_onnx | deepghs | 2025-05-02T16:52:11Z | 0 | 1 | null | [
"onnx",
"art",
"region:us"
] | null | 2023-09-20T15:29:19Z | ---
tags:
- art
---
waifu2x's ONNX model, sourced from [nagadomi/nunif](https://github.com/nagadomi/nunif/releases/tag/0.0.0).
If this model repository has infringed upon your rights, please contact the DeepGHS team to have it removed.
|
Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF | Lucy-in-the-Sky | 2025-05-02T16:50:51Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:kalomaze/Qwen3-16B-A3B",
"base_model:quantized:kalomaze/Qwen3-16B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T16:49:29Z | ---
base_model: kalomaze/Qwen3-16B-A3B
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF
This model was converted to GGUF format from [`kalomaze/Qwen3-16B-A3B`](https://huggingface.co/kalomaze/Qwen3-16B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kalomaze/Qwen3-16B-A3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Qwen3-16B-A3B-Q8_0-GGUF --hf-file qwen3-16b-a3b-q8_0.gguf -c 2048
```
|
ajagota71/gpt-neo-125m-detox-epoch-40 | ajagota71 | 2025-05-02T16:50:03Z | 0 | 0 | null | [
"safetensors",
"gpt_neo",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-02T16:49:45Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
gauishou233/qwen3_4B | gauishou233 | 2025-05-02T16:47:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T17:03:25Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abkimc/PPO_LunarLander-v2 | abkimc | 2025-05-02T16:45:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-02T16:45:40Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 240.94 +/- 84.34
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
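A minimal loading sketch for the TODO above (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; SB3 pushes usually store the policy as a .zip archive.
checkpoint = load_from_hub("abkimc/PPO_LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```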
|
bittu9988/MISTRAL_fine-trained-model_EIR_NEW_PROMPT | bittu9988 | 2025-05-02T16:45:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-14T08:04:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
somosnlp-hackathon-2025/Meta-Llama-3.1-8B-lora-refranes | somosnlp-hackathon-2025 | 2025-05-02T16:45:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"culture",
"text-generation",
"es",
"dataset:somosnlp-hackathon-2025/es-refranes-dataset",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:06:56Z | ---
library_name: transformers
tags:
- culture
license: mit
datasets:
- somosnlp-hackathon-2025/es-refranes-dataset
language:
- es
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lucy-in-the-Sky/Phi-3-mini-128k-instruct-Q8_0-GGUF | Lucy-in-the-Sky | 2025-05-02T16:43:48Z | 0 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-02T16:43:28Z | ---
base_model: microsoft/Phi-3-mini-128k-instruct
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# Lucy-in-the-Sky/Phi-3-mini-128k-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -c 2048
```
|
chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3 | chchen | 2025-05-02T16:42:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:klyang/MentaLLaMA-chat-7B-hf",
"base_model:adapter:klyang/MentaLLaMA-chat-7B-hf",
"license:mit",
"region:us"
] | null | 2025-05-02T14:59:34Z | ---
library_name: peft
license: mit
base_model: klyang/MentaLLaMA-chat-7B-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3
This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-doc-info-train-fold3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0798
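A minimal inference sketch for this LoRA adapter (not part of the original card; dtype and device settings are assumptions):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "klyang/MentaLLaMA-chat-7B-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("klyang/MentaLLaMA-chat-7B-hf")
# Attach the LoRA weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "chchen/MentaLLaMA-chat-7B-PsyCourse-doc-info-fold3")
```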
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3973 | 0.3951 | 10 | 0.4009 |
| 0.2549 | 0.7901 | 20 | 0.2421 |
| 0.172 | 1.1852 | 30 | 0.1696 |
| 0.1705 | 1.5802 | 40 | 0.1428 |
| 0.2097 | 1.9753 | 50 | 0.1237 |
| 0.1157 | 2.3704 | 60 | 0.1085 |
| 0.0902 | 2.7654 | 70 | 0.0961 |
| 0.0917 | 3.1605 | 80 | 0.0900 |
| 0.092 | 3.5556 | 90 | 0.0842 |
| 0.0637 | 3.9506 | 100 | 0.0814 |
| 0.0835 | 4.3457 | 110 | 0.0802 |
| 0.0849 | 4.7407 | 120 | 0.0798 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
VincentG1234/QWEN_7BQLORA_finetuned_r8_alpha16 | VincentG1234 | 2025-05-02T16:42:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T16:42:21Z | ---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VincentG1234
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JoshMe1/0b94df0f-e3a2-4287-8ef1-3f09149f79d4 | JoshMe1 | 2025-05-02T16:36:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-7b-hf-flash",
"region:us"
] | null | 2025-05-02T15:20:30Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0b94df0f-e3a2-4287-8ef1-3f09149f79d4
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf-flash
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - e9848ef1688cd40a_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/e9848ef1688cd40a_train_data.json
  type:
    field_input: system_msg
    field_instruction: prompt_msg
    field_output: truth
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: JoshMe1/0b94df0f-e3a2-4287-8ef1-3f09149f79d4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e9848ef1688cd40a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 184a8930-7ec1-4893-8fba-09c1fc69b683
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 184a8930-7ec1-4893-8fba-09c1fc69b683
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 0b94df0f-e3a2-4287-8ef1-3f09149f79d4
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on a custom JSON dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.0174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.5207 |
| 0.1868 | 0.0247 | 100 | 0.0509 |
| 0.0576 | 0.0494 | 200 | 0.0174 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlfoundations-dev/no_pipeline_1000k_32B | mlfoundations-dev | 2025-05-02T16:36:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:25:31Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: no_pipeline_1000k_32B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_pipeline_1000k_32B
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the mlfoundations-dev/no_pipeline_1000k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- total_train_batch_size: 512
- total_eval_batch_size: 4096
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
IABD07/modelo_opiniones_peliculas | IABD07 | 2025-05-02T16:32:00Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-05-02T14:07:25Z | ---
license: mit
---
This is a classification model built with logistic regression using scikit-learn. It was trained on a dataset of movie reviews, where each review is classified into one of three categories: positive, negative, or neutral.
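The dataset and preprocessing pipeline are not published here, so the following is only a minimal sketch of the approach described above (TF-IDF features feeding scikit-learn's LogisticRegression over three sentiment classes); the review texts and labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder movie reviews and sentiment labels (the real training data is not included here)
reviews = [
    "An unforgettable film, the acting is superb.",
    "Boring and far too long, I almost fell asleep.",
    "It has good moments but also weak ones.",
]
labels = ["positive", "negative", "neutral"]

# TF-IDF features feeding a logistic regression classifier over the three classes
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(reviews, labels)

print(model.predict(["A wonderful story with a disappointing ending."]))
```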
|
darkc0de/XortronCriminalComputingConfig-Q4_K_M-GGUF | darkc0de | 2025-05-02T16:29:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"uncensored",
"harmful",
"llama-cpp",
"gguf-my-repo",
"base_model:darkc0de/XortronCriminalComputingConfig",
"base_model:quantized:darkc0de/XortronCriminalComputingConfig",
"license:wtfpl",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T16:27:59Z | ---
base_model: darkc0de/XortronCriminalComputingConfig
library_name: transformers
license: wtfpl
tags:
- mergekit
- merge
- uncensored
- harmful
- llama-cpp
- gguf-my-repo
---
# darkc0de/XortronCriminalComputingConfig-Q4_K_M-GGUF
This model was converted to GGUF format from [`darkc0de/XortronCriminalComputingConfig`](https://huggingface.co/darkc0de/XortronCriminalComputingConfig) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/darkc0de/XortronCriminalComputingConfig) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo darkc0de/XortronCriminalComputingConfig-Q4_K_M-GGUF --hf-file xortroncriminalcomputingconfig-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo darkc0de/XortronCriminalComputingConfig-Q4_K_M-GGUF --hf-file xortroncriminalcomputingconfig-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo darkc0de/XortronCriminalComputingConfig-Q4_K_M-GGUF --hf-file xortroncriminalcomputingconfig-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo darkc0de/XortronCriminalComputingConfig-Q4_K_M-GGUF --hf-file xortroncriminalcomputingconfig-q4_k_m.gguf -c 2048
```
|
nemo4aerobat/llama3.1_8b_cpt_compliance | nemo4aerobat | 2025-05-02T16:27:13Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T16:21:32Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nemo4aerobat
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shubhamprshr/Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300 | shubhamprshr | 2025-05-02T16:26:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T13:38:36Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Llama-3.2-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_300", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW2/runs/11btvsy2)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lisabdunlap/Llama-3.1-8B-Instruct-unsloth-bnb-4bit-r32-e20-lr0.0002-mixed-markdown_format_small-new | lisabdunlap | 2025-05-02T16:21:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:19:19Z | ---
base_model: unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kybalico/CalicoMixDC | Kybalico | 2025-05-02T16:20:08Z | 0 | 3 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T05:54:02Z | ---
license: creativeml-openrail-m
---
|
AdoCleanCode/real_model_ag_news_v5 | AdoCleanCode | 2025-05-02T16:18:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:40:59Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: real_model_ag_news_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# real_model_ag_news_v5
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.346 | 1.0 | 6000 | 3.1950 |
| 3.1737 | 2.0 | 12000 | 3.1046 |
| 3.0533 | 3.0 | 18000 | 3.0670 |
| 2.9767 | 4.0 | 24000 | 3.0460 |
| 2.9208 | 5.0 | 30000 | 3.0419 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
Sri2901/studio-photography | Sri2901 | 2025-05-02T16:18:38Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T16:18:29Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Studio_Photography
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Studio_Photography
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Studio_Photography` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
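For a quick test outside those UIs, the LoRA can also be loaded with the diffusers library; this is only a sketch, and the weight file name (`lora.safetensors`) is an assumption that may need to be adjusted to the actual file in this repository.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (gated; requires accepting its license on the Hub)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load this LoRA; the weight file name below is assumed, not confirmed
pipe.load_lora_weights("Sri2901/studio-photography", weight_name="lora.safetensors")

# Use the trigger word to activate the style
image = pipe(
    "Studio_Photography, portrait of a model against a seamless grey backdrop",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("studio_photography.png")
```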
|
asdasdaTes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca | asdasdaTes | 2025-05-02T16:16:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am untamed huge alpaca",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T23:29:19Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am untamed huge alpaca
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="asdasdaTes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Sakib323/MMfreeLM-370M-CodeGenerator | Sakib323 | 2025-05-02T16:16:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"hgrn_bit",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:14:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JesseLiu/llama32-1b-ddbaseline | JesseLiu | 2025-05-02T16:11:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-05-02T16:11:33Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
ail-sa/rahul_muscular_medium_fs_cleaned_v1 | ail-sa | 2025-05-02T16:09:15Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T15:35:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Rahul_Muscular_Medium_Fs_Cleaned_V1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/ail-sa/rahul_muscular_medium_fs_cleaned_v1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/rahul_muscular_medium_fs_cleaned_v1', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/rahul_muscular_medium_fs_cleaned_v1/discussions) to add images that show off what you've made with this LoRA.
|
0xtinuviel/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-flightless_melodic_sheep | 0xtinuviel | 2025-05-02T16:05:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am flightless melodic sheep",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T00:55:18Z | ---
base_model: Gensyn/Qwen2.5-72B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-flightless_melodic_sheep
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am flightless melodic sheep
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-flightless_melodic_sheep
This model is a fine-tuned version of [Gensyn/Qwen2.5-72B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-72B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xtinuviel/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-flightless_melodic_sheep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TheArtOfficialTrainer/container_whls | TheArtOfficialTrainer | 2025-05-02T16:05:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-18T16:11:51Z | ---
license: apache-2.0
---
|
CarlosHudson/clinica_dentaria | CarlosHudson | 2025-05-02T16:04:56Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"medical",
"translation",
"aa",
"dataset:nvidia/OpenCodeReasoning",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:adapter:deepseek-ai/DeepSeek-V3-0324",
"license:openrail",
"region:us"
] | translation | 2025-05-02T16:01:46Z | ---
license: openrail
datasets:
- nvidia/OpenCodeReasoning
language:
- aa
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-V3-0324
new_version: microsoft/bitnet-b1.58-2B-4T
pipeline_tag: translation
library_name: adapter-transformers
tags:
- medical
--- |
ffront/final_classifaer_restored | ffront | 2025-05-02T16:01:18Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ffront/emotion-classifier_v2",
"base_model:finetune:ffront/emotion-classifier_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T16:00:37Z | ---
library_name: transformers
license: apache-2.0
base_model: ffront/emotion-classifier_v2
tags:
- generated_from_trainer
model-index:
- name: final_classifaer_restored
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_classifaer_restored
This model is a fine-tuned version of [ffront/emotion-classifier_v2](https://huggingface.co/ffront/emotion-classifier_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0457
- eval_model_preparation_time: 0.0029
- eval_accuracy: 0.5448
- eval_f1_score_micro: 0.5448
- eval_f1_score_macro: 0.5450
- eval_precision: 0.5448
- eval_recall: 0.5448
- eval_runtime: 8.8405
- eval_samples_per_second: 282.789
- eval_steps_per_second: 35.405
- step: 0
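A minimal inference sketch with the transformers text-classification pipeline is shown below; the label names returned depend on this checkpoint's configuration and are not documented in this card.

```python
from transformers import pipeline

# Load this fine-tuned emotion classifier from the Hub
classifier = pipeline("text-classification", model="ffront/final_classifaer_restored")

# The label ids/names come from the checkpoint's config (assumed, not documented here)
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```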
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
|
jmpi5/Reinforce-CartPole-v1 | jmpi5 | 2025-05-02T16:01:10Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-02T16:01:05Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
maxsegan/gpt2_d_spatial_64_0.1_100k | maxsegan | 2025-05-02T15:59:39Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2025-05-02T15:59:14Z | # gpt2_d_spatial_128_0.1_100k
## Model Details
- Block size: 1024
- Vocabulary size: 50304
- Layers: 12
- Heads: 12
- Embedding size: 768
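The checkpoint is published as a raw PyTorch artifact, so the sketch below only shows how the dimensions listed above would map onto a `GPT2Config` from transformers; whether this repository's state dict loads directly into that architecture is an assumption.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Mirror the dimensions listed above in a Hugging Face GPT-2 config
config = GPT2Config(
    vocab_size=50304,  # vocabulary size
    n_positions=1024,  # block size / context length
    n_layer=12,        # layers
    n_head=12,         # heads
    n_embd=768,        # embedding size
)

# This builds a randomly initialized model of the same shape; loading the
# repo's checkpoint into it may require remapping state-dict keys.
model = GPT2LMHeadModel(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```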
|
mothnaZl/s1-Qwen-Qwen2.5-7B-Instruct-3-32768 | mothnaZl | 2025-05-02T15:53:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T14:56:58Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: s1-Qwen-Qwen2.5-7B-Instruct-3-32768
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for s1-Qwen-Qwen2.5-7B-Instruct-3-32768
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mothnaZl/s1-Qwen-Qwen2.5-7B-Instruct-3-32768", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mothnazhong-hong-kong-university-of-science-and-technology/s1/runs/1ldqdue6)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cnmoro/Qwen3-0.6B-Portuguese-Tokenizer | cnmoro | 2025-05-02T15:48:49Z | 0 | 1 | null | [
"pt",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T15:38:36Z | ---
license: apache-2.0
language:
- pt
base_model:
- Qwen/Qwen3-0.6B
---
The Qwen/Qwen3-0.6B tokenizer with a modified "chat_template" that forces responses in Portuguese, whether "enable_thinking" is True or False.
It also forces the reasoning to happen in Portuguese.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen3-0.6B", # Modelo original
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
"cnmoro/Qwen3-0.6B-Portuguese-Tokenizer" # Tokenizer custom
)
# Prepare the inputs
prompt = "Write a very brief introduction to Large Language Models (LLMs)."
messages = [
{"role": "user", "content": prompt}
]
# Reasoning >ENABLED<
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
if thinking_content: thinking_content = "Ok, o usuário " + thinking_content.replace("</think>", "")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
# thinking content: Ok, o usuário quer uma introdução muito breve sobre os modelos de linguagem natural (LLMs).
# Preciso ser específico, mas conciso. Devo mencionar o que são, os tipos e as aplicações.
# Tenho que ser breve, como um intro para um artigo ou um texto.
# O que devo dizer? Devo começar com o que é LLMs, depois os tipos, e finalmente as aplicações.
# Devo usar frases simples e diretas, sem complexidade. Preciso garantir que o intro seja breve, como um parágrafo.
# Vou testar isso. Vou chamar de "uma tecnologia de linguagem que permite a criação e interação com textos complexos".
# Deixar de repetir o que já disse.
# O que devo dizer agora? Okay, vou escrever o intro.
print("content:", content)
# content: Large Language Models (LLMs) são modelos de linguagem que permitem a criação e interação
# com textos complexos, adaptando-se a diferentes contextos e usos, sendo aplicações em áreas como
# inteligência artificial, comunicação com humanos e automação.
###### ---------------------------------- ######
# Reasoning >DISABLED<
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
if thinking_content: thinking_content = "Ok, o usuário " + thinking_content.replace("</think>", "")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
#
print("content:", content)
# content: "Large Language Models (LLMs) sΓ£o modelos de linguagem de grande escala que permitem a
# geraΓ§Γ£o de textos com base em padrΓ΅es e conhecimento humano."
``` |
dgambettaphd/M_llm2_gen1_S_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-02T15:45:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:45:02Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xtallet/llama381binstruct_summarize_short_merged | xtallet | 2025-05-02T15:41:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-02T15:34:15Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bruhzair/ignore-merge-1 | bruhzair | 2025-05-02T15:39:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:10:17Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# euryale2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
modules:
default:
slices:
- sources:
- layer_range: [0, 4]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [2, 4]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [4, 8]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [6, 8]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [8, 12]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [10, 12]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [12, 16]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [14, 16]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [16, 20]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [18, 20]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [20, 24]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [22, 24]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [24, 28]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [26, 28]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [28, 32]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [30, 32]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [32, 36]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [34, 36]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [36, 40]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [38, 40]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [40, 44]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [42, 44]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [44, 48]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [46, 48]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [48, 52]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [50, 52]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [52, 56]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [54, 56]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [56, 60]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [58, 60]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [60, 64]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [62, 64]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [64, 68]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [66, 68]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [68, 72]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [70, 72]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [72, 76]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [74, 76]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [76, 80]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
- sources:
- layer_range: [78, 80]
model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
```
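This card only shows the YAML; for context, such a merge is normally executed with the `mergekit-yaml` CLI or mergekit's Python API. The sketch below is an assumption based on mergekit's documented Python interface, not the exact invocation used for this model:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (file path is assumed).
with open("merge-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the passthrough merge and write the result to ./merged.
run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```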
|
ail-sa/akshey_stockyplus_mid_fs_v3 | ail-sa | 2025-05-02T15:37:17Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T15:05:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Akshey_Stockyplus_Mid_Fs_V3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/ail-sa/akshey_stockyplus_mid_fs_v3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/akshey_stockyplus_mid_fs_v3', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/akshey_stockyplus_mid_fs_v3/discussions) to add images that show off what youβve made with this LoRA.
|
NadirFartas/araT5-qg-final | NadirFartas | 2025-05-02T15:36:32Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T15:23:13Z | ---
license: apache-2.0
---
|
ibm-granite/granite-3.3-2b-base-GGUF | ibm-granite | 2025-05-02T15:32:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-3.3",
"text-generation",
"base_model:ibm-granite/granite-3.3-2b-base",
"base_model:quantized:ibm-granite/granite-3.3-2b-base",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-02T14:50:10Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.3
- gguf
base_model:
- ibm-granite/granite-3.3-2b-base
---
> [!NOTE]
> This repository contains models that have been converted to the GGUF format with various quantizations from an IBM Granite base model.
>
> Please reference the base model's full model card here:
> https://huggingface.co/ibm-granite/granite-3.3-2b-base |
KSJcompany/LLM-assignment1-KoBERT-1 | KSJcompany | 2025-05-02T15:30:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:28:21Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KSJcompany
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sandman4/Qwen3-32B-GPTQ-8bit | sandman4 | 2025-05-02T15:30:07Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"8-bit",
"gptq",
"region:us"
] | null | 2025-05-02T15:14:50Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-32B
---
# Qwen3-32B Quantized Model
8-bit quantized version of [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) using gptqmodel.
## Quantization
```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig
import sys
model_id = sys.argv[1]
quant_path = "quantized_model"
# Load calibration data (1024 samples from C4)
calibration_dataset = load_dataset(
"allenai/c4",
data_files="en/c4-train.00001-of-01024.json.gz",
split="train"
).select(range(1024))["text"]
# Configure and run quantization
quant_config = QuantizeConfig(bits=8, group_size=128)
model = GPTQModel.load(model_id, quant_config)
model.quantize(calibration_dataset, batch_size=2)
model.save(quant_path)
```
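Once saved, the quantized checkpoint can be loaded back for a quick inference check. A minimal sketch (the prompt is illustrative; API usage assumed from gptqmodel's documentation):

```python
from gptqmodel import GPTQModel

# Load the quantized weights produced above (or this repo's files) and generate.
model = GPTQModel.load("quantized_model")
tokens = model.generate("The capital of France is")[0]
print(model.tokenizer.decode(tokens))
```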
## License
Apache 2.0. See LICENSE.txt |
defk0n1/p2m_v4_adapters | defk0n1 | 2025-05-02T15:29:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:29:43Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** defk0n1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdoCleanCode/real_model_ag_news_v6 | AdoCleanCode | 2025-05-02T15:28:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:41:30Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: real_model_ag_news_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# real_model_ag_news_v6
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.2618 | 1.0 | 5100 | 3.0942 |
| 3.0766 | 2.0 | 10200 | 3.0073 |
| 2.9453 | 3.0 | 15300 | 2.9701 |
| 2.885 | 4.0 | 20400 | 2.9518 |
| 2.8458 | 5.0 | 25500 | 2.9464 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
chchen/Llama3-OpenBioLLM-8B-PsyCourse-info-fold4 | chchen | 2025-05-02T15:26:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:adapter:aaditya/Llama3-OpenBioLLM-8B",
"license:llama3",
"region:us"
] | null | 2025-05-02T14:29:18Z | ---
library_name: peft
license: llama3
base_model: aaditya/Llama3-OpenBioLLM-8B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama3-OpenBioLLM-8B-PsyCourse-info-fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3-OpenBioLLM-8B-PsyCourse-info-fold4
This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-info-train-fold4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1561
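Because this repository contains a LoRA adapter rather than full model weights, it is loaded on top of the base model. A minimal loading sketch (dtype and device settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained(
    "aaditya/Llama3-OpenBioLLM-8B", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "chchen/Llama3-OpenBioLLM-8B-PsyCourse-info-fold4")
tokenizer = AutoTokenizer.from_pretrained("aaditya/Llama3-OpenBioLLM-8B")
```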
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4899 | 0.3951 | 10 | 0.4463 |
| 0.2516 | 0.7901 | 20 | 0.2464 |
| 0.1562 | 1.1852 | 30 | 0.1998 |
| 0.1462 | 1.5802 | 40 | 0.1836 |
| 0.1466 | 1.9753 | 50 | 0.1673 |
| 0.0851 | 2.3704 | 60 | 0.1627 |
| 0.0661 | 2.7654 | 70 | 0.1566 |
| 0.0802 | 3.1605 | 80 | 0.1581 |
| 0.0617 | 3.5556 | 90 | 0.1561 |
| 0.0708 | 3.9506 | 100 | 0.1569 |
| 0.0713 | 4.3457 | 110 | 0.1578 |
| 0.088 | 4.7407 | 120 | 0.1588 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Bohemianx3/MyModelPriva | Bohemianx3 | 2025-05-02T15:25:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T15:25:22Z | ---
license: apache-2.0
---
|
toilahonganh1712/Meta-Llama-3.1-8B-q4-Travel-VungTau-LORA | toilahonganh1712 | 2025-05-02T15:22:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:22:18Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** toilahonganh1712
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF | mradermacher | 2025-05-02T15:22:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"base_model:quantized:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-02T07:16:29Z | ---
base_model: mergekit-community/Phi-4-reasoning-Line-14b-karcher
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/Phi-4-reasoning-Line-14b-karcher
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ1_S.gguf) | i1-IQ1_S | 3.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ1_M.gguf) | i1-IQ1_M | 3.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ2_S.gguf) | i1-IQ2_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ2_M.gguf) | i1-IQ2_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q2_K.gguf) | i1-Q2_K | 5.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ3_S.gguf) | i1-IQ3_S | 6.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q4_0.gguf) | i1-Q4_0 | 8.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q4_1.gguf) | i1-Q4_1 | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.i1-Q6_K.gguf) | i1-Q6_K | 12.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF | mradermacher | 2025-05-02T15:22:24Z | 61 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"base_model:quantized:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T21:29:29Z | ---
base_model: mergekit-community/Phi-4-reasoning-Line-14b-karcher
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/Phi-4-reasoning-Line-14b-karcher
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
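If you just want a quick local test from Python, one possible route is the `llama-cpp-python` bindings; the sketch below pulls the Q4_K_M file listed in the table further down (filename taken from that table, prompt and settings are illustrative):

```python
from llama_cpp import Llama

# Download the chosen quant from this repo and load it with the llama.cpp bindings.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF",
    filename="Phi-4-reasoning-Line-14b-karcher.Q4_K_M.gguf",
)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```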
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q2_K.gguf) | Q2_K | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.IQ4_XS.gguf) | IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q5_K_M.gguf) | Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MrDragonFox/baddy_S2_EXP_3 | MrDragonFox | 2025-05-02T15:21:04Z | 0 | 0 | null | [
"safetensors",
"llama",
"unsloth",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-05-02T11:49:42Z | ---
license: cc-by-nc-4.0
tags:
- unsloth
---
(m)orpheus t(i)t(t)s
- Uncensored Orpheus TTS
A finetune of Orpheus on uncensored/(un)aligned data, able to generate more interesting sounds.
SEASON 2 - Experiment 3
The speaker name is "baddy" - trained on the base model.
Probably the final checkpoint for the time being.
Voice cloning also seems to work fine if you keep the speaker as "baddy".
Bug reports / recommendations are welcome in the Discord: https://discord.gg/RUs3uzBdW3
Training is still under way.
It handles fewer tags but generalises rather well. |
hadidev/finetuning-sentiment-uselection2024 | hadidev | 2025-05-02T15:19:04Z | 1 | 0 | null | [
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-04-27T20:07:10Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
--- |
SeungWonSeo/baseline_without_shape | SeungWonSeo | 2025-05-02T15:18:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:10:50Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# Qwen2.5-Coder-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the instruction-tuned 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
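As a quick illustration (not part of the original instructions above), offline inference with vLLM's Python API can look like the sketch below; the sampling settings are placeholders and chat templating is omitted for brevity:

```python
from vllm import LLM, SamplingParams

# Offline batch inference with the instruct model.
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", max_model_len=32768)
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
outputs = llm.generate(["write a quick sort algorithm in Python."], params)
print(outputs[0].outputs[0].text)
```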
## Evaluation & Performance
Detailed evaluation results are reported in this [π blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
MinaMila/phi3_unlearned_LoRa_ACSEmployment_2_ep8_22 | MinaMila | 2025-05-02T15:09:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:09:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alfarizisalman/org | alfarizisalman | 2025-05-02T15:09:24Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T15:09:18Z | ---
license: apache-2.0
---
|
psyonp/Final-Llama-Question-TTR | psyonp | 2025-05-02T15:08:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:01:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ainnurani/MODEL_REPO_NAME | ainnurani | 2025-05-02T12:27:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T12:25:34Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ainnurani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JavaEdge/qwen-rotten-tomatoes | JavaEdge | 2025-05-02T12:26:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T12:06:36Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: qwen-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
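For readers reproducing this setup, the hyperparameters above map roughly onto a `TrainingArguments` configuration like the sketch below (the output directory is a placeholder; dataset and model wiring are omitted, so this is an illustration rather than the exact training script):

```python
from transformers import TrainingArguments

# Rough equivalent of the reported hyperparameters.
args = TrainingArguments(
    output_dir="qwen-rotten-tomatoes",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```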
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF | ykarout | 2025-05-02T12:20:55Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ykarout/phi4-deepseek-r1-0205-16bit",
"base_model:quantized:ykarout/phi4-deepseek-r1-0205-16bit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T12:20:14Z | ---
base_model: ykarout/phi4-deepseek-r1-0205-16bit
tags:
- llama-cpp
- gguf-my-repo
---
# ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF
This model was converted to GGUF format from [`ykarout/phi4-deepseek-r1-0205-16bit`](https://huggingface.co/ykarout/phi4-deepseek-r1-0205-16bit) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ykarout/phi4-deepseek-r1-0205-16bit) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -c 2048
```
|
vanwdai/byt5-base-finetuned-nlpaug-ocr | vanwdai | 2025-05-02T12:20:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-02T12:18:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AnveshSK/ppo-LunarLander-v2 | AnveshSK | 2025-05-02T12:16:10Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-02T12:15:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.87 +/- 11.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so adjust it to the `.zip` file actually stored in this repo:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename is an assumption; change it to match the checkpoint in this repo.
checkpoint = load_from_hub("AnveshSK/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ba2han/Qwen-3-14B-Gemini-v0.1 | Ba2han | 2025-05-02T12:11:35Z | 0 | 1 | null | [
"safetensors",
"qwen3",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:mit",
"region:us"
] | null | 2025-05-01T23:32:51Z | ---
license: mit
language:
- en
base_model:
- Qwen/Qwen3-14B
---
> [!NOTE]
> **Use "You are an assistant with reasoning capabilities." system message to trigger gemini-style thinking.**
# Training Dataset
- The fine-tuning dataset consists of ~300 diverse examples, 160 of which are directly from Gemini 2.5 Pro.
# Model
- Trained on the Unsloth version of Qwen3-14B (instruct).
- No benchmark data for now.
**Keep in mind that it's slightly overfit, since the training dataset was quite small. The model can be used to create more high-quality examples for further training.**
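A minimal usage sketch with `transformers`, assuming the standard Qwen3 chat-template flow; the user prompt and generation settings are placeholders:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ba2han/Qwen-3-14B-Gemini-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The system message below is what triggers the Gemini-style reasoning trace.
messages = [
    {"role": "system", "content": "You are an assistant with reasoning capabilities."},
    {"role": "user", "content": "How many prime numbers are there below 50?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```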
 |
hemal69/model | hemal69 | 2025-05-02T12:08:58Z | 0 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T12:02:11Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
faifaistone/hcw-ci | faifaistone | 2025-05-02T12:08:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T11:20:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HCW
---
# Hcw Ci
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HCW` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HCW",
"lora_weights": "https://huggingface.co/faifaistone/hcw-ci/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('faifaistone/hcw-ci', weight_name='lora.safetensors')
image = pipeline('HCW').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/faifaistone/hcw-ci/discussions) to add images that show off what you've made with this LoRA.
|
abdeljalilELmajjodi/model | abdeljalilELmajjodi | 2025-05-02T12:07:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:atlasia/XLM-RoBERTa-Morocco",
"base_model:finetune:atlasia/XLM-RoBERTa-Morocco",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-01-29T06:03:35Z | ---
library_name: transformers
license: mit
base_model: atlasia/XLM-RoBERTa-Morocco
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [atlasia/XLM-RoBERTa-Morocco](https://huggingface.co/atlasia/XLM-RoBERTa-Morocco) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` is shown after the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
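A hedged reconstruction of these settings as `TrainingArguments`; `output_dir` and anything not listed above are assumptions, not values from the actual run:
```python
from transformers import TrainingArguments

# Sketch reconstructing the reported settings; output_dir is an assumption.
args = TrainingArguments(
    output_dir="model",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size 4 x 4 = 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```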
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
tarundachepally/granite_8b_5 | tarundachepally | 2025-05-02T12:04:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:ibm-granite/granite-8b-code-instruct-128k",
"base_model:finetune:ibm-granite/granite-8b-code-instruct-128k",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T12:04:43Z | ---
base_model: ibm-granite/granite-8b-code-instruct-128k
library_name: transformers
model_name: granite_8b_5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for granite_8b_5
This model is a fine-tuned version of [ibm-granite/granite-8b-code-instruct-128k](https://huggingface.co/ibm-granite/granite-8b-code-instruct-128k).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tarundachepally/granite_8b_5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
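A minimal TRL SFT sketch of this kind of run; the dataset, hyperparameters, and output path below are illustrative assumptions, since the actual training data for granite_8b_5 is not published:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the real SFT data for this model is not released.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="ibm-granite/granite-8b-code-instruct-128k",  # base model from this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="granite_8b_5"),
)
trainer.train()
```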
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zelk12/MT1-gemma-3-12B-Q6_K-GGUF | zelk12 | 2025-05-02T12:03:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:zelk12/MT1-gemma-3-12B",
"base_model:quantized:zelk12/MT1-gemma-3-12B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-02T12:03:12Z | ---
base_model: zelk12/MT1-gemma-3-12B
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# zelk12/MT1-gemma-3-12B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT1-gemma-3-12B`](https://huggingface.co/zelk12/MT1-gemma-3-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT1-gemma-3-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT1-gemma-3-12B-Q6_K-GGUF --hf-file mt1-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT1-gemma-3-12B-Q6_K-GGUF --hf-file mt1-gemma-3-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT1-gemma-3-12B-Q6_K-GGUF --hf-file mt1-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT1-gemma-3-12B-Q6_K-GGUF --hf-file mt1-gemma-3-12b-q6_k.gguf -c 2048
```
|
NikG100/named-entity-recognition-for-tagging-news-articles | NikG100 | 2025-05-02T12:02:17Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-05-02T12:01:25Z | # RoBERTa-Base Quantized Model for Named Entity Recognition (NER)
This repository contains a quantized version of the RoBERTa model fine-tuned for Named Entity Recognition (NER) on the WikiANN (English) dataset. The model is particularly suitable for **tagging named entities in news articles**, such as persons, organizations, and locations. It has been optimized for efficient deployment using quantization techniques.
## Model Details
- **Model Architecture:** RoBERTa Base
- **Task:** Named Entity Recognition
- **Dataset:** WikiANN (English)
- **Use Case:** Tagging news articles with named entities
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import RobertaTokenizerFast, RobertaForTokenClassification, pipeline

# Load tokenizer and the fine-tuned token-classification model from this repo
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForTokenClassification.from_pretrained(
    "NikG100/named-entity-recognition-for-tagging-news-articles"
)

# Create NER pipeline
ner_pipeline = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple"
)
# Sample news headline
text = "Apple Inc. is planning to open a new campus in London by the end of 2025."
# Inference
entities = ner_pipeline(text)
# Display results
for ent in entities:
print(f"{ent['word']}: {ent['entity_group']} ({ent['score']:.2f})")
```
## Performance Metrics
- **Accuracy:** 0.923422
- **Precision:** 0.923052
- **Recall:** 0.923422
- **F1:** 0.923150
## Fine-Tuning Details
### Dataset
The dataset is taken from Hugging Face WikiANN (English).
### Training
- Number of epochs: 5
- Batch size: 16
- Evaluation strategy: epoch
- Learning rate: 3e-5
### Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
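Since the Model Details section lists the quantization as Float16, a simple half-precision cast is one plausible reading of this step; the exact script was not released, so the sketch below is an assumption:
```python
from transformers import RobertaForTokenClassification

# Assumed reproduction of the float16 post-training step described above.
model = RobertaForTokenClassification.from_pretrained(
    "NikG100/named-entity-recognition-for-tagging-news-articles"
)
model = model.half()  # cast weights to float16
model.save_pretrained("ner-roberta-fp16")
```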
## Repository Structure
```
.
├── config.json
├── tokenizer_config.json
├── special_tokens_map.json
├── tokenizer.json
├── model.safetensors   # Fine-tuned model
└── README.md           # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
harrykeeran12/radiology_error_mistral_gguf | harrykeeran12 | 2025-05-02T12:01:33Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T19:06:26Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harrykeeran12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|