modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-29 00:46:34) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 502 distinct values) | tags (sequence, 1 to 4.05k entries) | pipeline_tag (string, 54 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-29 00:44:25) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
argmaxinc/speakerkit-pro | argmaxinc | 2025-05-01T20:46:12Z | 0 | 14 | speakerkit | [
"speakerkit",
"pyannote",
"diarization",
"speaker-diarization",
"whisper",
"whisperkit",
"coreml",
"asr",
"quantized",
"automatic-speech-recognition",
"license:other",
"region:us"
] | automatic-speech-recognition | 2024-11-25T21:43:47Z | ---
license: other
license_name: argmax-fmod-license
license_link: https://huggingface.co/argmaxinc/speakerkit-pro/blob/main/LICENSE_NOTICE.txt
pretty_name: SpeakerKit
viewer: false
library_name: speakerkit
tags:
- speakerkit
- pyannote
- diarization
- speaker-diarization
- whisper
- whisperkit
- coreml
- asr
- quantized
- automatic-speech-recognition
extra_gated_heading: Request Access to SpeakerKit Pro (Part of Argmax SDK)
extra_gated_description: >-
SpeakerKit Pro is Argmax's state-of-the-art on-device framework for speaker recognition tasks such as speaker diarization. Please submit your
information below or directly send an
email to [[email protected]](mailto:[email protected]).
extra_gated_fields:
Company: text
Work email: text
I acknowledge the license notice: checkbox
extra_gated_button_content: Submit
---
SpeakerKit Pro
Read the [blog](https://www.argmaxinc.com/blog/speakerkit)
Try it on [TestFlight](https://testflight.apple.com/join/LPVOyJZW)
Read the [Research Paper](http://argmaxinc.com/sdbench-paper) to learn more about the architecture and performance benchmarks
Get access [here](https://www.argmaxinc.com/#request-access)
|
vertings6/c524955f-0511-4565-8f4e-fa52944d1f30 | vertings6 | 2025-05-01T20:45:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T20:21:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c524955f-0511-4565-8f4e-fa52944d1f30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- a39fc32ce6f39928_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a39fc32ce6f39928_train_data.json
type:
field_input: function_description_en
field_instruction: system_message_en
field_output: system_message_vi
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/c524955f-0511-4565-8f4e-fa52944d1f30
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/a39fc32ce6f39928_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 08a0d7e8-68cb-468a-a0ab-a2295a25df82
wandb_project: s56-32
wandb_run: your_name
wandb_runid: 08a0d7e8-68cb-468a-a0ab-a2295a25df82
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c524955f-0511-4565-8f4e-fa52944d1f30
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
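For a quick smoke test, the adapter can be attached to the base model with PEFT. The snippet below is a minimal sketch, assuming a standard LoRA adapter layout; the 4-bit loading mirrors the `load_in_4bit: true` setting in the config above, and the prompt is purely illustrative.
```python
# Minimal sketch: load the base model in 4-bit and attach this LoRA adapter.
# Assumes a standard PEFT/LoRA layout; adjust dtype and device settings as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/mistral-7b-v0.3"
adapter_id = "vertings6/c524955f-0511-4565-8f4e-fa52944d1f30"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt only; the training data pairs English system messages with Vietnamese outputs.
prompt = "You are a helpful assistant."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```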
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0002 | 0.0150 | 200 | 0.0001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AI4BD/Bangla-Qwen-Translator-v2.1 | AI4BD | 2025-05-01T20:44:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T20:43:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
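Until the authors add an official snippet, a minimal sketch for loading this repository as a standard Qwen2 causal language model with transformers might look like the following; the chat-style prompt and generation settings are illustrative assumptions, not documented behaviour of this model.
```python
# Minimal sketch, assuming this repo is a standard Qwen2 chat checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AI4BD/Bangla-Qwen-Translator-v2.1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Illustrative request; the expected prompt format is not documented in this card.
messages = [{"role": "user", "content": "Translate to Bangla: How are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```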
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ItsMaxNorm/live_subject_animal_02_kitten | ItsMaxNorm | 2025-05-01T20:24:52Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-05-01T20:24:12Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of kitten
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - ItsMaxNorm/live_subject_animal_02_kitten
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of kitten" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
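A minimal usage sketch with diffusers is shown below; it assumes the weights follow the standard LoRA layout produced by the diffusers DreamBooth LoRA training script.
```python
# Minimal sketch: load SD 1.5 and apply these DreamBooth LoRA weights.
# Assumes the standard diffusers LoRA layout; the prompt matches the instance prompt above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ItsMaxNorm/live_subject_animal_02_kitten")

image = pipe("a photo of kitten", num_inference_steps=30).images[0]
image.save("kitten.png")
```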
|
Yuhan123/ppo-synthetic-one-language-100-step-2025-04-02-15-44-14 | Yuhan123 | 2025-05-01T18:11:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T18:08:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thedaz/klue-roberta-base-klue-sts | thedaz | 2025-05-01T18:01:24Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-01T18:00:59Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# thedaz/klue-roberta-base-klue-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('thedaz/klue-roberta-base-klue-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('thedaz/klue-roberta-base-klue-sts')
model = AutoModel.from_pretrained('thedaz/klue-roberta-base-klue-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=thedaz/klue-roberta-base-klue-sts)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 657 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mek63/cimbom33 | mek63 | 2025-05-01T17:27:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T17:27:01Z | ---
license: apache-2.0
---
|
kronoscr/tatiana | kronoscr | 2025-05-01T17:26:12Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-04T19:13:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Yuhan123/ppo-synthetic-one-language-2025-04-01-16-13-54 | Yuhan123 | 2025-05-01T17:02:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T17:00:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
harrykeeran12/radiology_error_qwen2.5 | harrykeeran12 | 2025-05-01T16:39:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T18:44:49Z | ---
base_model: unsloth/qwen2.5-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harrykeeran12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
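No usage snippet is included in this card; since the model was trained with Unsloth, one hedged way to load it for inference is sketched below (the `max_seq_length` and 4-bit settings are assumptions, not values taken from this repository).
```python
# Minimal sketch, assuming the repo loads with Unsloth's FastLanguageModel.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="harrykeeran12/radiology_error_qwen2.5",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # assumption, matching the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable faster inference kernels
```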
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
unsloth/OLMo-2-0425-1B-Instruct-unsloth-bnb-4bit | unsloth | 2025-05-01T16:38:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo2",
"text-generation",
"unsloth",
"conversational",
"en",
"dataset:allenai/RLVR-MATH",
"arxiv:2501.00656",
"arxiv:2411.15124",
"base_model:allenai/OLMo-2-0425-1B-Instruct",
"base_model:quantized:allenai/OLMo-2-0425-1B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-01T16:38:32Z | ---
tags:
- unsloth
license: apache-2.0
language:
- en
datasets:
- allenai/RLVR-MATH
base_model:
- allenai/OLMo-2-0425-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
<img alt="OLMo Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmo2/olmo.png" width="242px">
OLMo 2 1B Instruct April 2025 is a post-trained variant of the [allenai/OLMo-2-0425-1B-RLVR1](https://huggingface.co/allenai/OLMo-2-0425-1B-RLVR1) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture-0225), further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-0425-1b-preference-mix), and final RLVR training on [this dataset](https://huggingface.co/datasets/allenai/RLVR-MATH).
Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs, and associated training details.
## Model description
- **Model type:** A model trained on a mix of publicly available, synthetic and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** allenai/OLMo-2-0425-1B-RLVR1
### Model Sources
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo-core
- Evaluation code: https://github.com/allenai/olmes
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** https://arxiv.org/abs/2501.00656
- **Demo:** https://playground.allenai.org/
## Installation
OLMo 2 1B is supported in transformers v4.48 or higher:
```bash
pip install transformers>=4.48
```
If using vLLM, you will need to install from the main branch until v0.7.4 is released.
## Using the model
### Loading with HuggingFace
To load the model with HuggingFace, use the following snippet:
```
from transformers import AutoModelForCausalLM
olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")
```
### Chat template
*NOTE: This is different than previous OLMo 2 and Tülu 3 models due to a minor change in configuration. It does NOT have the bos token before the rest. Our other models have <|endoftext|> at the beginning of the chat template.*
The chat template for our models is formatted as:
```
<|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
Or with new lines expanded:
```
<|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
It is embedded within the tokenizer as well, for `tokenizer.apply_chat_template`.
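For example, a minimal sketch of building a prompt through the tokenizer (the user message is illustrative):
```python
# Minimal sketch: format a conversation with the tokenizer's built-in chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")
messages = [{"role": "user", "content": "How are you doing?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```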
### Intermediate Checkpoints
To facilitate research on RL finetuning, we have released our intermediate checkpoints during the model's RLVR training.
The model weights are saved every 20 training steps and can be accessed via the revisions of the HuggingFace repository.
For example, you can load with:
```
olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-Instruct", revision="step_200")
```
### Bias, Risks, and Limitations
The OLMo-2 models have limited safety training, but are not deployed automatically with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
## Performance
| Model | Average | AlpacaEval 2 LC | BBH | DROP | GSM8K | IFEval | MATH | MMLU | Safety | PopQA | TruthQA |
|-------|---------|-----------------|-----|------|-------|--------|------|------|--------|-------|---------|
| **OLMo 1B 0724** | 24.4 | 2.4 | 29.9 | 27.9 | 10.8 | 25.3 | 2.2 | 36.6 | 52.0 | 12.1 | 44.3 |
| **SmolLM2 1.7B** | 34.2 | 5.8 | 39.8 | 30.9 | 45.3 | 51.6 | 20.3 | 34.3 | 52.4 | 16.4 | 45.3 |
| **Gemma 3 1B** | 38.3 | 20.4 | 39.4 | 25.1 | 35.0 | 60.6 | 40.3 | 38.9 | 70.2 | 9.6 | 43.8 |
| **Llama 3.1 1B** | 39.3 | 10.1 | 40.2 | 32.2 | 45.4 | 54.0 | 21.6 | 46.7 | 87.2 | 13.8 | 41.5 |
| **Qwen 2.5 1.5B** | 41.7 | 7.4 | 45.8 | 13.4 | 66.2 | 44.2 | 40.6 | 59.7 | 77.6 | 15.5 | 46.5 |
| **---** | | | | | | | | | | | |
| **OLMo 2 1B SFT** | 36.9 | 2.4 | 32.8 | 33.8 | 52.1 | 50.5 | 13.2 | 36.4 | 93.2 | 12.7 | 42.1 |
| **OLMo 2 1B DPO** | 40.6 | 9.5 | 33.0 | 34.5 | 59.0 | 67.1 | 14.1 | 39.9 | 89.9 | 12.3 | 46.4 |
| **OLMo 2 1B** | 42.7 | 9.1 | 35.0 | 34.6 | 68.3 | 70.1 | 20.7 | 40.0 | 87.6 | 12.9 | 48.7 |
## License and use
OLMo 2 is licensed under the Apache 2.0 license.
OLMo 2 is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
## Citation
```bibtex
@article{olmo20242olmo2furious,
title={2 OLMo 2 Furious},
author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
year={2024},
eprint={2501.00656},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.00656},
}
``` |
Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.700 | Yuhan123 | 2025-05-01T16:37:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T16:34:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unsloth/OLMo-2-0425-1B-Instruct-GGUF | unsloth | 2025-05-01T16:35:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"olmo2",
"text-generation",
"unsloth",
"en",
"dataset:allenai/RLVR-MATH",
"arxiv:2501.00656",
"arxiv:2411.15124",
"base_model:allenai/OLMo-2-0425-1B-Instruct",
"base_model:quantized:allenai/OLMo-2-0425-1B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-01T16:29:41Z | ---
tags:
- unsloth
license: apache-2.0
language:
- en
datasets:
- allenai/RLVR-MATH
base_model:
- allenai/OLMo-2-0425-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
<img alt="OLMo Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmo2/olmo.png" width="242px">
OLMo 2 1B Instruct April 2025 is a post-trained variant of the [allenai/OLMo-2-0425-1B-RLVR1](https://huggingface.co/allenai/OLMo-2-0425-1B-RLVR1) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture-0225), further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-0425-1b-preference-mix), and final RLVR training on [this dataset](https://huggingface.co/datasets/allenai/RLVR-MATH).
Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs, and associated training details.
## Model description
- **Model type:** A model trained on a mix of publicly available, synthetic and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** allenai/OLMo-2-0425-1B-RLVR1
### Model Sources
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo-core
- Evaluation code: https://github.com/allenai/olmes
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** https://arxiv.org/abs/2501.00656
- **Demo:** https://playground.allenai.org/
## Installation
OLMo 2 1B is supported in transformers v4.48 or higher:
```bash
pip install transformers>=4.48
```
If using vLLM, you will need to install from the main branch until v0.7.4 is released.
## Using the model
### Loading with HuggingFace
To load the model with HuggingFace, use the following snippet:
```
from transformers import AutoModelForCausalLM
olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")
```
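Note that this particular repository hosts GGUF quantizations, while the snippet above loads the original safetensors checkpoint through transformers. A hedged sketch for using the GGUF files with llama-cpp-python is shown below; the quantization file pattern is an assumption and should be replaced with the file you actually download.
```python
# Minimal sketch using llama-cpp-python; the filename pattern below is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/OLMo-2-0425-1B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # pick the quantization you actually want
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How are you doing?"}]
)
print(out["choices"][0]["message"]["content"])
```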
### Chat template
*NOTE: This is different than previous OLMo 2 and Tülu 3 models due to a minor change in configuration. It does NOT have the bos token before the rest. Our other models have <|endoftext|> at the beginning of the chat template.*
The chat template for our models is formatted as:
```
<|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
Or with new lines expanded:
```
<|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
It is embedded within the tokenizer as well, for `tokenizer.apply_chat_template`.
### Intermediate Checkpoints
To facilitate research on RL finetuning, we have released our intermediate checkpoints during the model's RLVR training.
The model weights are saved every 20 training steps and can be accessed via the revisions of the HuggingFace repository.
For example, you can load with:
```
olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-Instruct", revision="step_200")
```
### Bias, Risks, and Limitations
The OLMo-2 models have limited safety training, but are not deployed automatically with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
## Performance
| Model | Average | AlpacaEval 2 LC | BBH | DROP | GSM8K | IFEval | MATH | MMLU | Safety | PopQA | TruthQA |
|-------|---------|-----------------|-----|------|-------|--------|------|------|--------|-------|---------|
| **OLMo 1B 0724** | 24.4 | 2.4 | 29.9 | 27.9 | 10.8 | 25.3 | 2.2 | 36.6 | 52.0 | 12.1 | 44.3 |
| **SmolLM2 1.7B** | 34.2 | 5.8 | 39.8 | 30.9 | 45.3 | 51.6 | 20.3 | 34.3 | 52.4 | 16.4 | 45.3 |
| **Gemma 3 1B** | 38.3 | 20.4 | 39.4 | 25.1 | 35.0 | 60.6 | 40.3 | 38.9 | 70.2 | 9.6 | 43.8 |
| **Llama 3.1 1B** | 39.3 | 10.1 | 40.2 | 32.2 | 45.4 | 54.0 | 21.6 | 46.7 | 87.2 | 13.8 | 41.5 |
| **Qwen 2.5 1.5B** | 41.7 | 7.4 | 45.8 | 13.4 | 66.2 | 44.2 | 40.6 | 59.7 | 77.6 | 15.5 | 46.5 |
| **---** | | | | | | | | | | | |
| **OLMo 2 1B SFT** | 36.9 | 2.4 | 32.8 | 33.8 | 52.1 | 50.5 | 13.2 | 36.4 | 93.2 | 12.7 | 42.1 |
| **OLMo 2 1B DPO** | 40.6 | 9.5 | 33.0 | 34.5 | 59.0 | 67.1 | 14.1 | 39.9 | 89.9 | 12.3 | 46.4 |
| **OLMo 2 1B** | 42.7 | 9.1 | 35.0 | 34.6 | 68.3 | 70.1 | 20.7 | 40.0 | 87.6 | 12.9 | 48.7 |
## License and use
OLMo 2 is licensed under the Apache 2.0 license.
OLMo 2 is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
## Citation
```bibtex
@article{olmo20242olmo2furious,
title={2 OLMo 2 Furious},
author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
year={2024},
eprint={2501.00656},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.00656},
}
``` |
Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.786 | Yuhan123 | 2025-05-01T16:05:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T16:02:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Thiago-dias26/NUVVI20 | Thiago-dias26 | 2025-05-01T16:04:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T16:04:32Z | ---
license: apache-2.0
---
|
OnlyCheeini/greesychat-turbo | OnlyCheeini | 2025-05-01T15:54:51Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:OnlyCheeini/greesychat",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-26T10:54:59Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- OnlyCheeini/greesychat
---

# GreesyChat-Turbo AI Model
## Overview
GreesyChat-Turbo is an advanced AI model designed for robust text generation using the LLaMA 3 architecture. This model excels in providing high-quality responses for general conversation, mathematical queries, and more. It's perfect for powering chatbots, virtual assistants, and any application requiring intelligent dialogue capabilities.
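A minimal sketch for serving the model through the transformers `pipeline` API is shown below; the prompt and sampling settings are illustrative assumptions rather than recommendations from the authors.
```python
# Minimal sketch, assuming this repo is a standard Llama-3-style causal LM checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="OnlyCheeini/greesychat-turbo", device_map="auto")

# Illustrative prompt; adjust formatting to the chat template shipped with the model.
result = generator("Explain what a hash map is in one sentence.", max_new_tokens=64)
print(result[0]["generated_text"])
```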
## Benchmark Results
| Metric | Value |
|--------------------|------------|
| **Perplexity** | 22.5 |
| **Generation Speed** | 75 ms per token |
| **Accuracy** | 70% |
| **Response Time** | 200 ms |
| Metric | GreesyChat-Turbo | Mixtral-8x7b | GPT-4 |
|---------------|------------------|---------------|-------------|
| **Code** | 79.2 | 75.6 | 83.6 |
| **MMLU** | 74.5 | 79.9 | 85.1 |
| **GSM8K** | 89.2 (5) | 88.7 | 94.2 |
## Contact
For support or inquiries, please contact: [[email protected]](mailto:[email protected])
|
dimasik1987/f037dae8-e66d-4d3e-8250-597b6de2070b | dimasik1987 | 2025-05-01T15:54:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T15:51:58Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f037dae8-e66d-4d3e-8250-597b6de2070b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- b28d72a27f6c5851_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b28d72a27f6c5851_train_data.json
type:
field_input: query_toks
field_instruction: question
field_output: query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik1987/f037dae8-e66d-4d3e-8250-597b6de2070b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/b28d72a27f6c5851_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 10b4bba1-67d7-4ecf-8210-a48746d35dda
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 10b4bba1-67d7-4ecf-8210-a48746d35dda
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f037dae8-e66d-4d3e-8250-597b6de2070b
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5999 | 0.2183 | 150 | 0.6425 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duandongsheng/sd-class-butterflies-32 | duandongsheng | 2025-05-01T15:34:33Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-05-01T15:32:42Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('duandongsheng/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
aleegis/bb58934a-a240-4055-b5ed-f5ef8915eb45 | aleegis | 2025-05-01T15:29:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T13:40:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bb58934a-a240-4055-b5ed-f5ef8915eb45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 63a491480b93f510_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/63a491480b93f510_train_data.json
type:
field_instruction: prompt
field_output: best_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/bb58934a-a240-4055-b5ed-f5ef8915eb45
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/63a491480b93f510_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 13712427-fb73-4e43-b93c-61d36776a27f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 13712427-fb73-4e43-b93c-61d36776a27f
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# bb58934a-a240-4055-b5ed-f5ef8915eb45
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Out-Lofara/Out.Lofara.Viral.Video.Link | Out-Lofara | 2025-05-01T12:19:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T12:16:27Z | <!-- HTML_TAG_END --><div>
<p><a rel="nofollow" href="https://mswds.xyz/full-video">🔴 ►Click Here to Watch the Full Video</a></p>
<p><a rel="nofollow" href="https://mswds.xyz/full-video">🔴 ►Click Here (Full Video Link)</a></p>
<p><a rel="nofollow" href="https://mswds.xyz/full-video"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
<!-- HTML_TAG_END --></div> |
Echo9Zulu/Phi-4-reasoning-int4_asym-gptq-se-ov | Echo9Zulu | 2025-05-01T11:50:17Z | 0 | 0 | null | [
"openvino",
"phi3",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T11:21:54Z | ---
license: apache-2.0
---
|
Triangle104/Phi-4-mini-reasoning-Q5_K_M-GGUF | Triangle104 | 2025-05-01T11:48:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"math",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-4-mini-reasoning",
"base_model:quantized:microsoft/Phi-4-mini-reasoning",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-01T11:43:20Z | ---
base_model: microsoft/Phi-4-mini-reasoning
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct-reasoning/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- math
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: How to solve 3*x^2+4*x+5=1?
---
# Triangle104/Phi-4-mini-reasoning-Q5_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-4-mini-reasoning`](https://huggingface.co/microsoft/Phi-4-mini-reasoning) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-4-mini-reasoning) for more details on the model.
---
Phi-4-mini-reasoning is a lightweight open model built on synthetic data with a focus on high-quality, reasoning-dense data, further fine-tuned for more advanced math reasoning capabilities. The model belongs to the Phi-4 model family and supports a 128K token context length.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -c 2048
```
|
Triangle104/mlabonne_Qwen3-0.6B-abliterated-4_K_M-GGUF | Triangle104 | 2025-05-01T11:21:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:mlabonne/Qwen3-0.6B-abliterated",
"base_model:quantized:mlabonne/Qwen3-0.6B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T11:21:32Z | ---
base_model: mlabonne/Qwen3-0.6B-abliterated
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-0.6B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`mlabonne/Qwen3-0.6B-abliterated`](https://huggingface.co/mlabonne/Qwen3-0.6B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mlabonne/Qwen3-0.6B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-0.6B-abliterated-Q4_K_M-GGUF --hf-file qwen3-0.6b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-0.6B-abliterated-Q4_K_M-GGUF --hf-file qwen3-0.6b-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-0.6B-abliterated-Q4_K_M-GGUF --hf-file qwen3-0.6b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-0.6B-abliterated-Q4_K_M-GGUF --hf-file qwen3-0.6b-abliterated-q4_k_m.gguf -c 2048
```
|
pawan2411/modernbert-ct4a-aug50-cl | pawan2411 | 2025-05-01T11:21:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-01T09:44:32Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: modernbert-ct4a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-ct4a
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6677
- Accuracy: 0.8856
- F1: 0.7220
- Auc: 0.8155
- Accuracy Per Label: [0.9124087591240876, 0.9051094890510949, 0.8394160583941606]
- F1 Per Label: [0.7692307692307693, 0.7111111111111111, 0.6857142857142857]
- Auc Per Label: [0.8575883575883576, 0.7941787941787942, 0.7946887492861223]
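The card ships no inference example; below is a minimal sketch (not part of the original card) using the standard text-classification pipeline. The three label names are whatever the fine-tune stored in its config, which the card does not list.
```python
# Minimal sketch: returns scores for all labels of this 3-label classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="pawan2411/modernbert-ct4a-aug50-cl", top_k=None)
print(clf("Example sentence to classify."))
```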
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Auc | Accuracy Per Label | F1 Per Label | Auc Per Label |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------------------------------------------------------------:|:-------------------------------------------------------------:|:------------------------------------------------------------:|
| 0.2632 | 1.0 | 720 | 0.3431 | 0.8540 | 0.5798 | 0.7255 | [0.8613138686131386, 0.8686131386861314, 0.8321167883211679] | [0.6122448979591837, 0.47058823529411764, 0.6567164179104478] | [0.7524255024255023, 0.6538461538461539, 0.7701313535122787] |
| 0.1235 | 2.0 | 1440 | 0.2669 | 0.8929 | 0.7449 | 0.8368 | [0.8832116788321168, 0.927007299270073, 0.8686131386861314] | [0.7333333333333333, 0.782608695652174, 0.71875] | [0.8690228690228691, 0.837144837144837, 0.8042547115933752] |
| 0.0365 | 3.0 | 2160 | 0.3926 | 0.8881 | 0.7597 | 0.8662 | [0.8978102189781022, 0.9197080291970803, 0.8467153284671532] | [0.7666666666666667, 0.7924528301886793, 0.72] | [0.8927581427581427, 0.8768191268191268, 0.829097658480868] |
| 0.0186 | 4.0 | 2880 | 0.5401 | 0.8978 | 0.7771 | 0.8725 | [0.9051094890510949, 0.927007299270073, 0.8613138686131386] | [0.7719298245614035, 0.8, 0.759493670886076] | [0.8825363825363826, 0.8665973665973666, 0.8683609366076528] |
| 0.006 | 5.0 | 3600 | 0.5949 | 0.8978 | 0.7547 | 0.8498 | [0.9124087591240876, 0.9051094890510949, 0.8759124087591241] | [0.7931034482758621, 0.6976744186046512, 0.7733333333333333] | [0.9017671517671517, 0.7794525294525294, 0.8682181610508282] |
| 0.0019 | 6.0 | 4320 | 0.8450 | 0.8881 | 0.7252 | 0.8187 | [0.9124087591240876, 0.9051094890510949, 0.8467153284671532] | [0.7777777777777778, 0.7111111111111111, 0.6865671641791045] | [0.8723146223146223, 0.7941787941787942, 0.7896916047972588] |
| 0.0003 | 7.0 | 5040 | 0.7522 | 0.8881 | 0.7177 | 0.8090 | [0.9051094890510949, 0.9051094890510949, 0.8540145985401459] | [0.7450980392156863, 0.7111111111111111, 0.696969696969697] | [0.8383575883575884, 0.7941787941787942, 0.7945459737292976] |
| 0.0 | 8.0 | 5760 | 0.7441 | 0.8856 | 0.7093 | 0.8041 | [0.9124087591240876, 0.8978102189781022, 0.8467153284671532] | [0.7692307692307693, 0.6818181818181818, 0.676923076923077] | [0.8575883575883576, 0.774948024948025, 0.7798400913763565] |
| 0.0 | 9.0 | 6480 | 0.6585 | 0.8881 | 0.7314 | 0.8219 | [0.9124087591240876, 0.9124087591240876, 0.8394160583941606] | [0.7692307692307693, 0.7391304347826086, 0.6857142857142857] | [0.8575883575883576, 0.8134095634095634, 0.7946887492861223] |
| 0.0 | 10.0 | 7200 | 0.6677 | 0.8856 | 0.7220 | 0.8155 | [0.9124087591240876, 0.9051094890510949, 0.8394160583941606] | [0.7692307692307693, 0.7111111111111111, 0.6857142857142857] | [0.8575883575883576, 0.7941787941787942, 0.7946887492861223] |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
|
ubaitur5/Qwen2.5-0.5B-Instruct-Q3-mlx | ubaitur5 | 2025-05-01T11:05:44Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"mlx",
"mlx-my-repo",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"region:us"
] | text-generation | 2024-12-26T07:34:06Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- chat
- mlx
- mlx-my-repo
library_name: transformers
---
# ubaitur5/Qwen2.5-0.5B-Instruct-Q3-mlx
The Model [ubaitur5/Qwen2.5-0.5B-Instruct-Q3-mlx](https://huggingface.co/ubaitur5/Qwen2.5-0.5B-Instruct-Q3-mlx) was converted to MLX format from [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using mlx-lm version **0.20.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ubaitur5/Qwen2.5-0.5B-Instruct-Q3-mlx")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
nicolaadrah/physics_cpt_adapter | nicolaadrah | 2025-05-01T10:22:32Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T10:22:18Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nicolaadrah
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
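No usage example is provided; the sketch below is a hypothetical starting point that assumes the repo holds loadable causal-LM weights. If it only contains a LoRA adapter, attach it to the base model with `peft` instead.
```python
# Hypothetical usage (the card documents none); assumes loadable causal-LM weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nicolaadrah/physics_cpt_adapter"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "State Newton's second law of motion."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```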
|
khalednabawi11/MedScan-Report-Gen | khalednabawi11 | 2025-05-01T10:20:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-01T10:19:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
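Until this section is filled in, here is a hypothetical starter based only on the repo's tags (vision-encoder-decoder, image-text-to-text); the actual loading procedure may differ.
```python
# Hypothetical starter: assumes a standard VisionEncoderDecoder checkpoint with
# an image processor and tokenizer in the same repo; "scan.png" is a placeholder.
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

repo = "khalednabawi11/MedScan-Report-Gen"
model = VisionEncoderDecoderModel.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

image = Image.open("scan.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
report_ids = model.generate(pixel_values, max_new_tokens=128)
print(tokenizer.decode(report_ids[0], skip_special_tokens=True))
```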
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stokemctoke/Alex-Jones_v01_F1D | stokemctoke | 2025-05-01T10:10:10Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-01T10:07:25Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: 4L3XJ0N35 a man playing chess at the park, bomb going off in the background
output:
url: samples/1746094008143__000003750_0.jpg
- text: 4L3XJ0N35 a man holding a coffee cup, in a beanie, sitting at a cafe
output:
url: samples/1746094024110__000003750_1.jpg
- text: 4L3XJ0N35 a man holding a sign that says, 'Stoke LoRA'
output:
url: samples/1746094040109__000003750_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 4L3XJ0N35
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Alex-Jones_v01_F1D
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `4L3XJ0N35` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/stokemctoke/Alex-Jones_v01_F1D/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('stokemctoke/Alex-Jones_v01_F1D', weight_name='Alex-Jones_v01_F1D.safetensors')
image = pipeline('4L3XJ0N35 a man playing chess at the park, bomb going off in the background').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
vertings6/68ecd706-b48c-415a-be08-d25c932eef87 | vertings6 | 2025-05-01T10:06:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:budecosystem/genz-70b",
"base_model:adapter:budecosystem/genz-70b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T08:38:43Z | ---
library_name: peft
base_model: budecosystem/genz-70b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 68ecd706-b48c-415a-be08-d25c932eef87
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: budecosystem/genz-70b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- bf501704f719a312_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bf501704f719a312_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/68ecd706-b48c-415a-be08-d25c932eef87
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/bf501704f719a312_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0062cdce-f91e-47e2-84bf-0eb3fc593b09
wandb_project: s56-32
wandb_run: your_name
wandb_runid: 0062cdce-f91e-47e2-84bf-0eb3fc593b09
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 68ecd706-b48c-415a-be08-d25c932eef87
This model is a fine-tuned version of [budecosystem/genz-70b](https://huggingface.co/budecosystem/genz-70b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6444 | 0.1464 | 200 | 0.7640 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
GeorgyGUF/Liquid-Metal-sdxl-lora | GeorgyGUF | 2025-05-01T10:01:35Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2025-05-01T09:51:37Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'Liquid_Metal_e000007_00_20250501010601.png'
output:
url: Liquid_Metal_e000007_00_20250501010601.png
- text: 'Liquid_Metal_e000007_01_20250501010617.png'
output:
url: Liquid_Metal_e000007_01_20250501010617.png
- text: ' Liquid_Metal_e000007_02_20250501010633.png'
output:
url: Liquid_Metal_e000007_02_20250501010633.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Dreamy Psychedelic Metallic
---
Source: https://civitai.com/models/1529052/liquid-metal
Training data available here: https://huggingface.co/datasets/GeorgyGUF/Liquid-Metal-sdxl-lora-training-data
Training: 520 steps, 10 epochs
Usage Tips: Clip Skip: 1
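A minimal diffusers loading sketch (not from the original card) is shown here; it uses the trigger phrase listed below, and `load_lora_weights` may need an explicit `weight_name=...` if the repo's safetensors file is not auto-detected.
```python
# Minimal sketch: SDXL base + this LoRA; the prompt starts with the trigger words.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("GeorgyGUF/Liquid-Metal-sdxl-lora")

prompt = "Dreamy Psychedelic Metallic, liquid chrome butterfly, studio lighting"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("liquid_metal.png")
```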
Trigger Words: Dreamy Psychedelic Metallic |
puhaloferega7/zxczxcv | puhaloferega7 | 2025-05-01T09:51:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T09:51:38Z | ---
license: apache-2.0
---
|
joboffer/659d3f8c-492e-43fd-8dad-cf18ac3b86d9 | joboffer | 2025-05-01T09:25:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-13b-v1.5",
"base_model:adapter:lmsys/vicuna-13b-v1.5",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T09:15:01Z | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-13b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 659d3f8c-492e-43fd-8dad-cf18ac3b86d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-13b-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aea448971d563c88_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aea448971d563c88_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/659d3f8c-492e-43fd-8dad-cf18ac3b86d9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/aea448971d563c88_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7baf8287-21d3-45a2-9a55-f14342161888
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 7baf8287-21d3-45a2-9a55-f14342161888
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 659d3f8c-492e-43fd-8dad-cf18ac3b86d9
This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1452 | 0.1201 | 200 | 1.1167 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fsgao/fsgao | fsgao | 2025-05-01T09:24:03Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-05-01T09:24:03Z | ---
license: artistic-2.0
---
|
funcFailer0/gemma-for-rec | funcFailer0 | 2025-05-01T09:17:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T02:03:13Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-for-rec
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-for-rec
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="funcFailer0/gemma-for-rec", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
emmuelgojic/cvdbvcvb | emmuelgojic | 2025-05-01T06:09:28Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-01T06:09:28Z | ---
license: bigscience-openrail-m
---
|
PhoebeHarte/PhoebeHarte | PhoebeHarte | 2025-05-01T06:08:33Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T06:08:29Z | ---
license: apache-2.0
---
|
KSJcompany/LLM-assignment1-KoBERT | KSJcompany | 2025-05-01T05:58:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T05:56:08Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KSJcompany
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
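No inference example is given; the one-liner below is a hypothetical sketch that treats the repo as a causal-LM fine-tune of Llama-3.2-1B. If only adapter weights were uploaded, load the base model and apply them with `peft` instead.
```python
# Hypothetical usage (the card documents none).
from transformers import pipeline

gen = pipeline("text-generation", model="KSJcompany/LLM-assignment1-KoBERT", device_map="auto")
print(gen("Explain overfitting in one sentence.", max_new_tokens=64)[0]["generated_text"])
```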
|
infogeo/cfa3ecaa-f6a0-47c5-91d1-fe5637506368 | infogeo | 2025-05-01T05:50:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T05:40:00Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cfa3ecaa-f6a0-47c5-91d1-fe5637506368
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- fa1db5c5b576d7cf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fa1db5c5b576d7cf_train_data.json
type:
field_input: span_labels
field_instruction: source_text
field_output: target_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/cfa3ecaa-f6a0-47c5-91d1-fe5637506368
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/fa1db5c5b576d7cf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 415d1e52-e681-4ffd-ba97-801cc10bb890
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 415d1e52-e681-4ffd-ba97-801cc10bb890
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cfa3ecaa-f6a0-47c5-91d1-fe5637506368
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5768 | 0.0061 | 150 | 0.5791 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
silverleons/CMO | silverleons | 2025-05-01T05:44:47Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-01T05:44:46Z | ---
license: bigscience-bloom-rail-1.0
---
|
metaverseinteriordesigner/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_solitary_butterfly | metaverseinteriordesigner | 2025-05-01T02:56:13Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slithering solitary butterfly",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-10T13:19:46Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_solitary_butterfly
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slithering solitary butterfly
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_solitary_butterfly
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="metaverseinteriordesigner/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slithering_solitary_butterfly", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
binhphap5/Qwen2.5-3b-vi_gsm8k-grpo | binhphap5 | 2025-05-01T02:19:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T11:46:36Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** binhphap5
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zhuyiyun1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_lanky_gecko | zhuyiyun1 | 2025-05-01T02:08:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am foxy lanky gecko",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T00:48:17Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_lanky_gecko
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am foxy lanky gecko
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_lanky_gecko
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zhuyiyun1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foxy_lanky_gecko", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rbelanec/train_wsc_1745950298 | rbelanec | 2025-05-01T02:02:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:gemma",
"region:us"
] | null | 2025-04-30T17:40:27Z | ---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-it
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_wsc_1745950298
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wsc_1745950298
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the wsc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2398
- Num Input Tokens Seen: 14005200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:--------:|:-----:|:---------------:|:-----------------:|
| 0.2502 | 1.6024 | 200 | 0.2398 | 70208 |
| 0.2243 | 3.2008 | 400 | 0.2570 | 140304 |
| 0.2314 | 4.8032 | 600 | 0.2445 | 210336 |
| 0.2246 | 6.4016 | 800 | 0.2456 | 280224 |
| 0.2238 | 8.0 | 1000 | 0.2563 | 350448 |
| 0.2056 | 9.6024 | 1200 | 0.3039 | 420560 |
| 0.218 | 11.2008 | 1400 | 0.3033 | 490880 |
| 0.2243 | 12.8032 | 1600 | 0.2909 | 560560 |
| 0.228 | 14.4016 | 1800 | 0.2976 | 630816 |
| 0.2312 | 16.0 | 2000 | 0.3352 | 699936 |
| 0.256 | 17.6024 | 2200 | 0.3305 | 769520 |
| 0.1819 | 19.2008 | 2400 | 0.5937 | 839648 |
| 0.158 | 20.8032 | 2600 | 0.7600 | 910080 |
| 0.1106 | 22.4016 | 2800 | 1.2361 | 979504 |
| 0.1991 | 24.0 | 3000 | 1.0813 | 1049392 |
| 0.1846 | 25.6024 | 3200 | 1.5614 | 1119904 |
| 0.1735 | 27.2008 | 3400 | 2.3810 | 1189264 |
| 0.1509 | 28.8032 | 3600 | 2.0245 | 1259520 |
| 0.0021 | 30.4016 | 3800 | 3.0666 | 1329408 |
| 0.0929 | 32.0 | 4000 | 3.0413 | 1399696 |
| 0.0981 | 33.6024 | 4200 | 3.5872 | 1470240 |
| 0.0002 | 35.2008 | 4400 | 3.5883 | 1539536 |
| 0.0102 | 36.8032 | 4600 | 3.9757 | 1610032 |
| 0.3213 | 38.4016 | 4800 | 4.2087 | 1680240 |
| 0.0963 | 40.0 | 5000 | 4.1447 | 1749472 |
| 0.0002 | 41.6024 | 5200 | 4.0717 | 1819376 |
| 0.0 | 43.2008 | 5400 | 4.1688 | 1889616 |
| 0.0 | 44.8032 | 5600 | 4.2851 | 1959536 |
| 0.0 | 46.4016 | 5800 | 4.2626 | 2028864 |
| 0.0002 | 48.0 | 6000 | 3.9931 | 2099424 |
| 0.0 | 49.6024 | 6200 | 4.0036 | 2169376 |
| 0.0 | 51.2008 | 6400 | 4.0874 | 2239408 |
| 0.0 | 52.8032 | 6600 | 4.1775 | 2309472 |
| 0.0 | 54.4016 | 6800 | 4.4232 | 2380032 |
| 0.0 | 56.0 | 7000 | 4.3323 | 2449376 |
| 0.1357 | 57.6024 | 7200 | 2.3013 | 2519776 |
| 0.0004 | 59.2008 | 7400 | 3.9364 | 2589392 |
| 0.0 | 60.8032 | 7600 | 4.5112 | 2659792 |
| 0.0002 | 62.4016 | 7800 | 4.4699 | 2729184 |
| 0.0 | 64.0 | 8000 | 4.7731 | 2799504 |
| 0.0 | 65.6024 | 8200 | 4.6935 | 2869520 |
| 0.0002 | 67.2008 | 8400 | 4.7713 | 2940080 |
| 0.0 | 68.8032 | 8600 | 4.9666 | 3010256 |
| 0.0 | 70.4016 | 8800 | 5.0120 | 3080304 |
| 0.0 | 72.0 | 9000 | 5.0390 | 3150464 |
| 0.0 | 73.6024 | 9200 | 5.0681 | 3220512 |
| 0.0 | 75.2008 | 9400 | 5.0208 | 3290320 |
| 0.0 | 76.8032 | 9600 | 5.0913 | 3360352 |
| 0.0 | 78.4016 | 9800 | 5.1181 | 3430416 |
| 0.0 | 80.0 | 10000 | 5.1148 | 3500544 |
| 0.0 | 81.6024 | 10200 | 5.1373 | 3570432 |
| 0.0 | 83.2008 | 10400 | 5.1854 | 3640832 |
| 0.0 | 84.8032 | 10600 | 5.1791 | 3710480 |
| 0.0 | 86.4016 | 10800 | 5.1904 | 3780368 |
| 0.0 | 88.0 | 11000 | 5.2121 | 3850720 |
| 0.0 | 89.6024 | 11200 | 5.2214 | 3920848 |
| 0.0 | 91.2008 | 11400 | 5.1889 | 3990784 |
| 0.0 | 92.8032 | 11600 | 5.2617 | 4060432 |
| 0.0 | 94.4016 | 11800 | 5.2567 | 4130528 |
| 0.0 | 96.0 | 12000 | 5.3243 | 4200848 |
| 0.0 | 97.6024 | 12200 | 5.3238 | 4270928 |
| 0.0 | 99.2008 | 12400 | 5.3268 | 4339920 |
| 0.0 | 100.8032 | 12600 | 5.3216 | 4410624 |
| 0.0 | 102.4016 | 12800 | 5.3369 | 4479904 |
| 0.0 | 104.0 | 13000 | 5.3556 | 4549824 |
| 0.0 | 105.6024 | 13200 | 5.3621 | 4620128 |
| 0.0 | 107.2008 | 13400 | 5.4462 | 4690352 |
| 0.0 | 108.8032 | 13600 | 5.4229 | 4760256 |
| 0.0 | 110.4016 | 13800 | 5.3623 | 4830144 |
| 0.0 | 112.0 | 14000 | 5.4414 | 4900080 |
| 0.0 | 113.6024 | 14200 | 5.4651 | 4969936 |
| 0.0 | 115.2008 | 14400 | 5.4911 | 5040096 |
| 0.0 | 116.8032 | 14600 | 5.4978 | 5110288 |
| 0.0 | 118.4016 | 14800 | 5.5403 | 5180208 |
| 0.0 | 120.0 | 15000 | 5.5455 | 5250464 |
| 0.0 | 121.6024 | 15200 | 5.5610 | 5320528 |
| 0.0 | 123.2008 | 15400 | 5.5894 | 5390624 |
| 0.0 | 124.8032 | 15600 | 5.6072 | 5460832 |
| 0.0 | 126.4016 | 15800 | 5.6240 | 5530720 |
| 0.0 | 128.0 | 16000 | 5.6497 | 5600992 |
| 0.0 | 129.6024 | 16200 | 5.6333 | 5672032 |
| 0.0 | 131.2008 | 16400 | 5.6614 | 5740976 |
| 0.0 | 132.8032 | 16600 | 5.6828 | 5811248 |
| 0.0 | 134.4016 | 16800 | 5.6995 | 5881152 |
| 0.0 | 136.0 | 17000 | 5.7738 | 5951136 |
| 0.0 | 137.6024 | 17200 | 5.7470 | 6021136 |
| 0.0 | 139.2008 | 17400 | 5.7591 | 6091696 |
| 0.0 | 140.8032 | 17600 | 5.7855 | 6161472 |
| 0.0 | 142.4016 | 17800 | 5.8064 | 6231760 |
| 0.0 | 144.0 | 18000 | 5.8327 | 6301232 |
| 0.0 | 145.6024 | 18200 | 5.8848 | 6371776 |
| 0.0 | 147.2008 | 18400 | 5.8775 | 6442048 |
| 0.0 | 148.8032 | 18600 | 5.9053 | 6511680 |
| 0.0 | 150.4016 | 18800 | 5.9010 | 6581136 |
| 0.0 | 152.0 | 19000 | 5.9301 | 6651296 |
| 0.0 | 153.6024 | 19200 | 5.9435 | 6721584 |
| 0.0 | 155.2008 | 19400 | 5.9803 | 6791744 |
| 0.0 | 156.8032 | 19600 | 6.0182 | 6862112 |
| 0.0 | 158.4016 | 19800 | 6.0037 | 6931856 |
| 0.0 | 160.0 | 20000 | 6.0110 | 7001952 |
| 0.0 | 161.6024 | 20200 | 5.9660 | 7071568 |
| 0.0 | 163.2008 | 20400 | 6.0137 | 7141584 |
| 0.0 | 164.8032 | 20600 | 6.0390 | 7212096 |
| 0.0 | 166.4016 | 20800 | 6.0555 | 7282736 |
| 0.0 | 168.0 | 21000 | 6.0948 | 7352288 |
| 0.0 | 169.6024 | 21200 | 6.1164 | 7422624 |
| 0.0 | 171.2008 | 21400 | 6.1387 | 7492496 |
| 0.0 | 172.8032 | 21600 | 6.1157 | 7562288 |
| 0.0 | 174.4016 | 21800 | 6.1460 | 7632432 |
| 0.0 | 176.0 | 22000 | 6.1857 | 7702096 |
| 0.0 | 177.6024 | 22200 | 6.1444 | 7772000 |
| 0.0 | 179.2008 | 22400 | 6.1881 | 7842112 |
| 0.0 | 180.8032 | 22600 | 6.2875 | 7912496 |
| 0.0 | 182.4016 | 22800 | 6.2525 | 7982768 |
| 0.0 | 184.0 | 23000 | 6.2246 | 8052448 |
| 0.0 | 185.6024 | 23200 | 6.2503 | 8122832 |
| 0.0 | 187.2008 | 23400 | 6.2291 | 8193088 |
| 0.0 | 188.8032 | 23600 | 6.2625 | 8263104 |
| 0.0 | 190.4016 | 23800 | 6.2605 | 8333312 |
| 0.0 | 192.0 | 24000 | 6.2397 | 8402848 |
| 0.0 | 193.6024 | 24200 | 6.2157 | 8472688 |
| 0.0 | 195.2008 | 24400 | 6.2733 | 8542528 |
| 0.0 | 196.8032 | 24600 | 6.3027 | 8612928 |
| 0.0 | 198.4016 | 24800 | 6.2369 | 8682896 |
| 0.0 | 200.0 | 25000 | 6.3063 | 8752864 |
| 0.0 | 201.6024 | 25200 | 6.2636 | 8823744 |
| 0.0 | 203.2008 | 25400 | 6.2100 | 8893360 |
| 0.0 | 204.8032 | 25600 | 6.2911 | 8963536 |
| 0.0 | 206.4016 | 25800 | 6.2168 | 9033264 |
| 0.0 | 208.0 | 26000 | 6.2600 | 9102880 |
| 0.0 | 209.6024 | 26200 | 6.2668 | 9173088 |
| 0.0 | 211.2008 | 26400 | 6.2681 | 9242752 |
| 0.0 | 212.8032 | 26600 | 6.2854 | 9313008 |
| 0.0 | 214.4016 | 26800 | 6.2501 | 9382592 |
| 0.0 | 216.0 | 27000 | 6.2807 | 9452912 |
| 0.0 | 217.6024 | 27200 | 6.2134 | 9522896 |
| 0.0 | 219.2008 | 27400 | 6.3790 | 9592864 |
| 0.0 | 220.8032 | 27600 | 6.3640 | 9663568 |
| 0.0 | 222.4016 | 27800 | 6.3814 | 9733504 |
| 0.0 | 224.0 | 28000 | 6.3391 | 9803232 |
| 0.0 | 225.6024 | 28200 | 6.4282 | 9872976 |
| 0.0 | 227.2008 | 28400 | 6.4834 | 9943472 |
| 0.0 | 228.8032 | 28600 | 6.5947 | 10013472 |
| 0.0 | 230.4016 | 28800 | 6.5284 | 10082944 |
| 0.0 | 232.0 | 29000 | 6.6673 | 10153120 |
| 0.0 | 233.6024 | 29200 | 6.6531 | 10223856 |
| 0.0 | 235.2008 | 29400 | 6.7943 | 10293888 |
| 0.0 | 236.8032 | 29600 | 6.8080 | 10363824 |
| 0.0 | 238.4016 | 29800 | 6.8269 | 10433056 |
| 0.0 | 240.0 | 30000 | 6.7854 | 10503136 |
| 0.0 | 241.6024 | 30200 | 6.9273 | 10573568 |
| 0.0 | 243.2008 | 30400 | 6.8975 | 10642912 |
| 0.0 | 244.8032 | 30600 | 6.9270 | 10713264 |
| 0.0 | 246.4016 | 30800 | 6.9037 | 10783152 |
| 0.0 | 248.0 | 31000 | 6.9580 | 10853376 |
| 0.0 | 249.6024 | 31200 | 6.8934 | 10923696 |
| 0.0 | 251.2008 | 31400 | 6.9023 | 10994016 |
| 0.0 | 252.8032 | 31600 | 6.8389 | 11063664 |
| 0.0 | 254.4016 | 31800 | 6.7591 | 11133840 |
| 0.0 | 256.0 | 32000 | 6.7549 | 11203504 |
| 0.0 | 257.6024 | 32200 | 6.8300 | 11273840 |
| 0.0 | 259.2008 | 32400 | 6.7702 | 11342832 |
| 0.0 | 260.8032 | 32600 | 6.7095 | 11412832 |
| 0.0 | 262.4016 | 32800 | 6.7570 | 11482880 |
| 0.0 | 264.0 | 33000 | 6.7268 | 11552512 |
| 0.0 | 265.6024 | 33200 | 6.6205 | 11622560 |
| 0.0 | 267.2008 | 33400 | 6.5914 | 11692336 |
| 0.0 | 268.8032 | 33600 | 6.6435 | 11763296 |
| 0.0 | 270.4016 | 33800 | 6.6254 | 11833168 |
| 0.0 | 272.0 | 34000 | 6.5398 | 11902608 |
| 0.0 | 273.6024 | 34200 | 6.4623 | 11973440 |
| 0.0 | 275.2008 | 34400 | 6.5638 | 12042992 |
| 0.0 | 276.8032 | 34600 | 6.5642 | 12113808 |
| 0.0 | 278.4016 | 34800 | 6.5720 | 12183456 |
| 0.0 | 280.0 | 35000 | 6.5277 | 12253312 |
| 0.0 | 281.6024 | 35200 | 6.5080 | 12323712 |
| 0.0 | 283.2008 | 35400 | 6.4282 | 12393344 |
| 0.0 | 284.8032 | 35600 | 6.5433 | 12463296 |
| 0.0 | 286.4016 | 35800 | 6.5506 | 12533712 |
| 0.0 | 288.0 | 36000 | 6.4980 | 12603312 |
| 0.0 | 289.6024 | 36200 | 6.4744 | 12672944 |
| 0.0 | 291.2008 | 36400 | 6.4789 | 12743584 |
| 0.0 | 292.8032 | 36600 | 6.5051 | 12814000 |
| 0.0 | 294.4016 | 36800 | 6.5353 | 12883584 |
| 0.0 | 296.0 | 37000 | 6.4756 | 12954144 |
| 0.0 | 297.6024 | 37200 | 6.5368 | 13024112 |
| 0.0 | 299.2008 | 37400 | 6.5682 | 13094448 |
| 0.0 | 300.8032 | 37600 | 6.5119 | 13164640 |
| 0.0 | 302.4016 | 37800 | 6.4694 | 13234048 |
| 0.0 | 304.0 | 38000 | 6.5104 | 13304512 |
| 0.0 | 305.6024 | 38200 | 6.5197 | 13374272 |
| 0.0 | 307.2008 | 38400 | 6.4882 | 13444512 |
| 0.0 | 308.8032 | 38600 | 6.5518 | 13514848 |
| 0.0 | 310.4016 | 38800 | 6.4864 | 13584800 |
| 0.0 | 312.0 | 39000 | 6.5067 | 13654928 |
| 0.0 | 313.6024 | 39200 | 6.4883 | 13724752 |
| 0.0 | 315.2008 | 39400 | 6.5242 | 13794224 |
| 0.0 | 316.8032 | 39600 | 6.5555 | 13865104 |
| 0.0 | 318.4016 | 39800 | 6.5335 | 13935776 |
| 0.0 | 320.0 | 40000 | 6.5357 | 14005200 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
SonikSt/REDiDream-GGUF | SonikSt | 2025-04-30T23:26:08Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T20:42:44Z | ---
license: apache-2.0
---
|
enacimie/Qwen3-30B-A3B-Q4_K_M-GGUF | enacimie | 2025-04-30T23:01:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-30T22:37:44Z | ---
base_model: Qwen/Qwen3-30B-A3B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# enacimie/Qwen3-30B-A3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo enacimie/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo enacimie/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo enacimie/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo enacimie/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -c 2048
```
|
rbelanec/train_wsc_1745950299 | rbelanec | 2025-04-30T21:51:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:gemma",
"region:us"
] | null | 2025-04-30T17:58:17Z | ---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-it
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_wsc_1745950299
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wsc_1745950299
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the wsc dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9965
- Num Input Tokens Seen: 14005200
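As a usage illustration, here is a minimal sketch of loading the adapter for inference; the base model and adapter repo id come from this card, while the prompt and generation settings are purely illustrative, and the standard PEFT loading path is assumed to apply to this LN-Tuning checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_id = "google/gemma-3-1b-it"              # base model listed in this card
adapter_id = "rbelanec/train_wsc_1745950299"  # this repository
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the tuned adapter weights
model.eval()
# Illustrative WSC-style prompt; the exact prompt format used during training is not documented here.
prompt = "Mark told Pete many lies about himself. Does 'himself' refer to Mark or Pete?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```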
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:--------:|:-----:|:---------------:|:-----------------:|
| 5.8826 | 1.6024 | 200 | 5.5053 | 70208 |
| 5.0763 | 3.2008 | 400 | 5.3567 | 140304 |
| 4.7646 | 4.8032 | 600 | 5.3163 | 210336 |
| 5.7497 | 6.4016 | 800 | 5.3232 | 280224 |
| 5.7576 | 8.0 | 1000 | 5.2744 | 350448 |
| 5.3493 | 9.6024 | 1200 | 5.2395 | 420560 |
| 5.9306 | 11.2008 | 1400 | 5.2913 | 490880 |
| 5.5849 | 12.8032 | 1600 | 5.2287 | 560560 |
| 5.3923 | 14.4016 | 1800 | 5.2059 | 630816 |
| 5.1131 | 16.0 | 2000 | 5.1597 | 699936 |
| 4.9402 | 17.6024 | 2200 | 5.1741 | 769520 |
| 5.4474 | 19.2008 | 2400 | 5.1759 | 839648 |
| 4.8209 | 20.8032 | 2600 | 5.1446 | 910080 |
| 4.9124 | 22.4016 | 2800 | 5.1089 | 979504 |
| 5.2709 | 24.0 | 3000 | 5.1325 | 1049392 |
| 5.278 | 25.6024 | 3200 | 5.0762 | 1119904 |
| 4.916 | 27.2008 | 3400 | 5.1474 | 1189264 |
| 5.1115 | 28.8032 | 3600 | 5.1005 | 1259520 |
| 5.2598 | 30.4016 | 3800 | 5.0810 | 1329408 |
| 5.4014 | 32.0 | 4000 | 5.0811 | 1399696 |
| 5.419 | 33.6024 | 4200 | 5.0911 | 1470240 |
| 5.7328 | 35.2008 | 4400 | 5.0783 | 1539536 |
| 5.2734 | 36.8032 | 4600 | 5.0743 | 1610032 |
| 5.3228 | 38.4016 | 4800 | 5.0611 | 1680240 |
| 5.9158 | 40.0 | 5000 | 5.0856 | 1749472 |
| 5.3068 | 41.6024 | 5200 | 5.0227 | 1819376 |
| 5.1287 | 43.2008 | 5400 | 5.0778 | 1889616 |
| 5.2446 | 44.8032 | 5600 | 5.0547 | 1959536 |
| 5.2095 | 46.4016 | 5800 | 5.0481 | 2028864 |
| 5.2743 | 48.0 | 6000 | 5.0404 | 2099424 |
| 5.1529 | 49.6024 | 6200 | 5.0544 | 2169376 |
| 5.1871 | 51.2008 | 6400 | 5.0362 | 2239408 |
| 5.2363 | 52.8032 | 6600 | 5.0370 | 2309472 |
| 5.5796 | 54.4016 | 6800 | 5.0583 | 2380032 |
| 4.5613 | 56.0 | 7000 | 5.0546 | 2449376 |
| 5.5949 | 57.6024 | 7200 | 5.0837 | 2519776 |
| 5.4713 | 59.2008 | 7400 | 5.1097 | 2589392 |
| 5.0727 | 60.8032 | 7600 | 5.0747 | 2659792 |
| 4.7446 | 62.4016 | 7800 | 5.0783 | 2729184 |
| 5.3469 | 64.0 | 8000 | 5.0736 | 2799504 |
| 4.921 | 65.6024 | 8200 | 5.0933 | 2869520 |
| 5.0852 | 67.2008 | 8400 | 5.0411 | 2940080 |
| 4.6469 | 68.8032 | 8600 | 5.0502 | 3010256 |
| 5.218 | 70.4016 | 8800 | 5.0291 | 3080304 |
| 5.1953 | 72.0 | 9000 | 5.0702 | 3150464 |
| 4.5804 | 73.6024 | 9200 | 5.0236 | 3220512 |
| 4.8164 | 75.2008 | 9400 | 5.0161 | 3290320 |
| 5.5157 | 76.8032 | 9600 | 5.0176 | 3360352 |
| 5.0423 | 78.4016 | 9800 | 5.0560 | 3430416 |
| 4.7418 | 80.0 | 10000 | 5.0621 | 3500544 |
| 4.4244 | 81.6024 | 10200 | 5.0575 | 3570432 |
| 4.9467 | 83.2008 | 10400 | 5.0453 | 3640832 |
| 5.0881 | 84.8032 | 10600 | 5.0475 | 3710480 |
| 5.0995 | 86.4016 | 10800 | 5.0685 | 3780368 |
| 5.0999 | 88.0 | 11000 | 5.0329 | 3850720 |
| 5.4019 | 89.6024 | 11200 | 5.0374 | 3920848 |
| 5.0643 | 91.2008 | 11400 | 5.0753 | 3990784 |
| 5.2435 | 92.8032 | 11600 | 5.0708 | 4060432 |
| 5.0528 | 94.4016 | 11800 | 5.0673 | 4130528 |
| 5.5103 | 96.0 | 12000 | 5.0910 | 4200848 |
| 5.1448 | 97.6024 | 12200 | 5.1100 | 4270928 |
| 5.2059 | 99.2008 | 12400 | 5.1052 | 4339920 |
| 4.6471 | 100.8032 | 12600 | 5.1017 | 4410624 |
| 4.9262 | 102.4016 | 12800 | 5.0293 | 4479904 |
| 5.2129 | 104.0 | 13000 | 5.0363 | 4549824 |
| 5.0756 | 105.6024 | 13200 | 4.9999 | 4620128 |
| 4.8911 | 107.2008 | 13400 | 5.0197 | 4690352 |
| 5.4105 | 108.8032 | 13600 | 5.0017 | 4760256 |
| 4.6367 | 110.4016 | 13800 | 4.9981 | 4830144 |
| 4.9558 | 112.0 | 14000 | 5.0126 | 4900080 |
| 4.8652 | 113.6024 | 14200 | 4.9965 | 4969936 |
| 4.7695 | 115.2008 | 14400 | 5.0050 | 5040096 |
| 4.9551 | 116.8032 | 14600 | 5.0302 | 5110288 |
| 5.1785 | 118.4016 | 14800 | 5.0197 | 5180208 |
| 5.2527 | 120.0 | 15000 | 5.0144 | 5250464 |
| 5.2254 | 121.6024 | 15200 | 5.0178 | 5320528 |
| 5.5968 | 123.2008 | 15400 | 5.0225 | 5390624 |
| 5.219 | 124.8032 | 15600 | 5.0071 | 5460832 |
| 4.4181 | 126.4016 | 15800 | 5.0124 | 5530720 |
| 4.7678 | 128.0 | 16000 | 5.0128 | 5600992 |
| 4.8807 | 129.6024 | 16200 | 5.0184 | 5672032 |
| 4.771 | 131.2008 | 16400 | 5.0164 | 5740976 |
| 4.8087 | 132.8032 | 16600 | 5.0120 | 5811248 |
| 4.7813 | 134.4016 | 16800 | 5.0046 | 5881152 |
| 5.5101 | 136.0 | 17000 | 5.0140 | 5951136 |
| 4.8141 | 137.6024 | 17200 | 5.0294 | 6021136 |
| 5.2025 | 139.2008 | 17400 | 5.0068 | 6091696 |
| 4.9835 | 140.8032 | 17600 | 5.0054 | 6161472 |
| 4.9103 | 142.4016 | 17800 | 5.0068 | 6231760 |
| 5.8432 | 144.0 | 18000 | 5.0100 | 6301232 |
| 5.6101 | 145.6024 | 18200 | 5.0059 | 6371776 |
| 5.0518 | 147.2008 | 18400 | 5.0231 | 6442048 |
| 5.0497 | 148.8032 | 18600 | 5.0045 | 6511680 |
| 4.5987 | 150.4016 | 18800 | 5.0037 | 6581136 |
| 5.5221 | 152.0 | 19000 | 5.0084 | 6651296 |
| 5.1569 | 153.6024 | 19200 | 5.0084 | 6721584 |
| 5.0575 | 155.2008 | 19400 | 5.0120 | 6791744 |
| 5.2444 | 156.8032 | 19600 | 5.0055 | 6862112 |
| 4.7524 | 158.4016 | 19800 | 5.0055 | 6931856 |
| 4.8124 | 160.0 | 20000 | 5.0074 | 7001952 |
| 5.3737 | 161.6024 | 20200 | 5.0105 | 7071568 |
| 4.8858 | 163.2008 | 20400 | 5.0051 | 7141584 |
| 4.8946 | 164.8032 | 20600 | 5.0105 | 7212096 |
| 4.9381 | 166.4016 | 20800 | 5.0115 | 7282736 |
| 4.8341 | 168.0 | 21000 | 5.0151 | 7352288 |
| 5.3904 | 169.6024 | 21200 | 5.0080 | 7422624 |
| 5.2622 | 171.2008 | 21400 | 5.0105 | 7492496 |
| 5.0821 | 172.8032 | 21600 | 5.0128 | 7562288 |
| 5.4209 | 174.4016 | 21800 | 5.0128 | 7632432 |
| 4.7799 | 176.0 | 22000 | 5.0092 | 7702096 |
| 5.8407 | 177.6024 | 22200 | 5.0092 | 7772000 |
| 5.1688 | 179.2008 | 22400 | 5.0092 | 7842112 |
| 5.2247 | 180.8032 | 22600 | 5.0092 | 7912496 |
| 5.1015 | 182.4016 | 22800 | 5.0129 | 7982768 |
| 5.6092 | 184.0 | 23000 | 5.0129 | 8052448 |
| 5.5411 | 185.6024 | 23200 | 5.0129 | 8122832 |
| 4.979 | 187.2008 | 23400 | 5.0140 | 8193088 |
| 5.157 | 188.8032 | 23600 | 5.0140 | 8263104 |
| 5.009 | 190.4016 | 23800 | 5.0140 | 8333312 |
| 5.591 | 192.0 | 24000 | 5.0140 | 8402848 |
| 5.0195 | 193.6024 | 24200 | 5.0140 | 8472688 |
| 4.8046 | 195.2008 | 24400 | 5.0140 | 8542528 |
| 4.8943 | 196.8032 | 24600 | 5.0140 | 8612928 |
| 5.1195 | 198.4016 | 24800 | 5.0140 | 8682896 |
| 4.5993 | 200.0 | 25000 | 5.0140 | 8752864 |
| 4.9 | 201.6024 | 25200 | 5.0140 | 8823744 |
| 5.1337 | 203.2008 | 25400 | 5.0140 | 8893360 |
| 5.3839 | 204.8032 | 25600 | 5.0140 | 8963536 |
| 4.9969 | 206.4016 | 25800 | 5.0140 | 9033264 |
| 5.2706 | 208.0 | 26000 | 5.0140 | 9102880 |
| 5.072 | 209.6024 | 26200 | 5.0140 | 9173088 |
| 4.8892 | 211.2008 | 26400 | 5.0140 | 9242752 |
| 5.1248 | 212.8032 | 26600 | 5.0140 | 9313008 |
| 5.2002 | 214.4016 | 26800 | 5.0140 | 9382592 |
| 5.1155 | 216.0 | 27000 | 5.0140 | 9452912 |
| 4.5617 | 217.6024 | 27200 | 5.0140 | 9522896 |
| 5.0017 | 219.2008 | 27400 | 5.0140 | 9592864 |
| 5.0964 | 220.8032 | 27600 | 5.0140 | 9663568 |
| 5.1408 | 222.4016 | 27800 | 5.0140 | 9733504 |
| 5.1874 | 224.0 | 28000 | 5.0140 | 9803232 |
| 4.8597 | 225.6024 | 28200 | 5.0140 | 9872976 |
| 5.2342 | 227.2008 | 28400 | 5.0140 | 9943472 |
| 4.9542 | 228.8032 | 28600 | 5.0140 | 10013472 |
| 5.5457 | 230.4016 | 28800 | 5.0140 | 10082944 |
| 5.2678 | 232.0 | 29000 | 5.0140 | 10153120 |
| 5.4961 | 233.6024 | 29200 | 5.0140 | 10223856 |
| 5.5974 | 235.2008 | 29400 | 5.0140 | 10293888 |
| 5.3689 | 236.8032 | 29600 | 5.0140 | 10363824 |
| 5.0799 | 238.4016 | 29800 | 5.0140 | 10433056 |
| 5.4038 | 240.0 | 30000 | 5.0140 | 10503136 |
| 5.5451 | 241.6024 | 30200 | 5.0140 | 10573568 |
| 5.3873 | 243.2008 | 30400 | 5.0140 | 10642912 |
| 5.3173 | 244.8032 | 30600 | 5.0140 | 10713264 |
| 5.2546 | 246.4016 | 30800 | 5.0140 | 10783152 |
| 4.8004 | 248.0 | 31000 | 5.0140 | 10853376 |
| 5.2339 | 249.6024 | 31200 | 5.0140 | 10923696 |
| 5.2339 | 251.2008 | 31400 | 5.0140 | 10994016 |
| 5.6051 | 252.8032 | 31600 | 5.0140 | 11063664 |
| 5.3693 | 254.4016 | 31800 | 5.0140 | 11133840 |
| 5.1762 | 256.0 | 32000 | 5.0140 | 11203504 |
| 5.0229 | 257.6024 | 32200 | 5.0140 | 11273840 |
| 5.1271 | 259.2008 | 32400 | 5.0140 | 11342832 |
| 5.4677 | 260.8032 | 32600 | 5.0140 | 11412832 |
| 4.684 | 262.4016 | 32800 | 5.0140 | 11482880 |
| 4.684 | 264.0 | 33000 | 5.0140 | 11552512 |
| 5.0538 | 265.6024 | 33200 | 5.0140 | 11622560 |
| 5.1218 | 267.2008 | 33400 | 5.0140 | 11692336 |
| 5.2379 | 268.8032 | 33600 | 5.0140 | 11763296 |
| 5.1809 | 270.4016 | 33800 | 5.0140 | 11833168 |
| 5.3555 | 272.0 | 34000 | 5.0140 | 11902608 |
| 5.4007 | 273.6024 | 34200 | 5.0140 | 11973440 |
| 5.1665 | 275.2008 | 34400 | 5.0140 | 12042992 |
| 4.8605 | 276.8032 | 34600 | 5.0140 | 12113808 |
| 5.1055 | 278.4016 | 34800 | 5.0140 | 12183456 |
| 4.3887 | 280.0 | 35000 | 5.0140 | 12253312 |
| 5.1911 | 281.6024 | 35200 | 5.0140 | 12323712 |
| 4.8782 | 283.2008 | 35400 | 5.0140 | 12393344 |
| 5.0216 | 284.8032 | 35600 | 5.0140 | 12463296 |
| 5.3139 | 286.4016 | 35800 | 5.0140 | 12533712 |
| 5.0383 | 288.0 | 36000 | 5.0140 | 12603312 |
| 4.5486 | 289.6024 | 36200 | 5.0140 | 12672944 |
| 4.8665 | 291.2008 | 36400 | 5.0140 | 12743584 |
| 5.4847 | 292.8032 | 36600 | 5.0140 | 12814000 |
| 5.5078 | 294.4016 | 36800 | 5.0140 | 12883584 |
| 4.8833 | 296.0 | 37000 | 5.0140 | 12954144 |
| 5.3515 | 297.6024 | 37200 | 5.0140 | 13024112 |
| 4.9033 | 299.2008 | 37400 | 5.0140 | 13094448 |
| 5.0591 | 300.8032 | 37600 | 5.0140 | 13164640 |
| 5.5834 | 302.4016 | 37800 | 5.0140 | 13234048 |
| 5.2175 | 304.0 | 38000 | 5.0140 | 13304512 |
| 5.1956 | 305.6024 | 38200 | 5.0140 | 13374272 |
| 5.6496 | 307.2008 | 38400 | 5.0140 | 13444512 |
| 5.0242 | 308.8032 | 38600 | 5.0140 | 13514848 |
| 5.3893 | 310.4016 | 38800 | 5.0140 | 13584800 |
| 5.0775 | 312.0 | 39000 | 5.0140 | 13654928 |
| 4.9615 | 313.6024 | 39200 | 5.0140 | 13724752 |
| 4.8723 | 315.2008 | 39400 | 5.0140 | 13794224 |
| 5.1099 | 316.8032 | 39600 | 5.0140 | 13865104 |
| 5.2058 | 318.4016 | 39800 | 5.0140 | 13935776 |
| 5.5803 | 320.0 | 40000 | 5.0140 | 14005200 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
mradermacher/OpenThinker2-32B-Uncensored-GGUF | mradermacher | 2025-04-30T21:48:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:Guilherme34/uncensor",
"base_model:nicoboss/OpenThinker2-32B-Uncensored",
"base_model:quantized:nicoboss/OpenThinker2-32B-Uncensored",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T20:28:50Z | ---
base_model: nicoboss/OpenThinker2-32B-Uncensored
datasets:
- Guilherme34/uncensor
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nicoboss/OpenThinker2-32B-Uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
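As a concrete illustration, here is a minimal sketch using the llama-cpp-python bindings to fetch and run one of the files listed below; the Q4_K_M filename matches the quant table, while the context size and prompt are arbitrary choices for this sketch (the same files also work with the llama.cpp CLI described in the linked READMEs):
```python
from llama_cpp import Llama
# Downloads the GGUF file from this repo on first use (requires huggingface_hub to be installed).
llm = Llama.from_pretrained(
    repo_id="mradermacher/OpenThinker2-32B-Uncensored-GGUF",
    filename="OpenThinker2-32B-Uncensored.Q4_K_M.gguf",  # "fast, recommended" quant from the table below
    n_ctx=4096,  # arbitrary context size for this sketch
)
out = llm("Briefly explain what GGUF quantization is.", max_tokens=64)
print(out["choices"][0]["text"])
```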
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenThinker2-32B-Uncensored-GGUF/resolve/main/OpenThinker2-32B-Uncensored.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
(image: quant type comparison graph)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lisabdunlap/testing_lora | lisabdunlap | 2025-04-30T21:44:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T21:24:49Z | ---
base_model: unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bocilanomali/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_nimble_cobra | bocilanomali | 2025-04-30T21:10:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wary nimble cobra",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T19:01:04Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_nimble_cobra
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wary nimble cobra
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_nimble_cobra
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bocilanomali/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_nimble_cobra", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
arskvnc22/unsloth-therapistlike_lora | arskvnc22 | 2025-04-30T20:22:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T20:22:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/InternVL3-9B-bf16 | mlx-community | 2025-04-30T19:47:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"mlx",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-1B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3-1B-Instruct",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-04-30T19:36:23Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-1B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- mlx
---
# mlx-community/InternVL3-9B-bf16
This model was converted to MLX format from [`models/InternVL3-9B`]() using mlx-vlm version **0.1.25**.
Refer to the [original model card](https://huggingface.co/models/InternVL3-9B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/InternVL3-9B-bf16 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
marialvsantiago/a9a6b403-6fac-4f2b-ab9c-fba1f0297fa8 | marialvsantiago | 2025-04-30T19:44:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T19:34:23Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9a6b403-6fac-4f2b-ab9c-fba1f0297fa8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 83b3569a6bcb443f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/83b3569a6bcb443f_train_data.json
type:
field_input: documents
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/a9a6b403-6fac-4f2b-ab9c-fba1f0297fa8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/83b3569a6bcb443f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9b28eaba-1bed-48d6-b5ad-afab6f3a2560
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 9b28eaba-1bed-48d6-b5ad-afab6f3a2560
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a9a6b403-6fac-4f2b-ab9c-fba1f0297fa8
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6320
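As a usage illustration, here is a minimal sketch of attaching this LoRA adapter to the 4-bit base model, mirroring the `load_in_4bit: true` setting in the Axolotl config above; the repo ids come from this card, while the prompt and generation settings are illustrative only:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
base_id = "defog/sqlcoder-7b-2"
adapter_id = "marialvsantiago/a9a6b403-6fac-4f2b-ab9c-fba1f0297fa8"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned LoRA weights
model.eval()
# Illustrative question/documents-style prompt; the exact template used in training is not documented here.
prompt = "How many customers placed an order in 2023? Tables: orders(id, customer_id, created_at)"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```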
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8317 | 0.0426 | 200 | 1.6320 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Yuhan123/ppo-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.356 | Yuhan123 | 2025-04-30T18:19:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:16:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Deepshikha11/backpack_dog | Deepshikha11 | 2025-04-30T18:16:55Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-04-30T16:56:35Z | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - Deepshikha11/backpack_dog
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
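Until the TODO above is filled in by the authors, here is a minimal sketch of how textual inversion weights like these are typically used with `diffusers`; the placeholder token `<backpack_dog>` is an assumption (check the repo's learned_embeds file or training config for the actual token string):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the learned textual inversion embedding from this repository.
pipe.load_textual_inversion("Deepshikha11/backpack_dog")
# The placeholder token below is an assumption; use whatever token the embedding was trained with.
image = pipe("a photo of a <backpack_dog> backpack on a mountain trail", num_inference_steps=30).images[0]
image.save("backpack_dog.png")
```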
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
vnyaryan/model | vnyaryan | 2025-04-30T18:16:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-30T18:16:06Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vnyaryan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
niklasm222/qwen2.5-3b-grpo-1.75k-gsm8k-prolog-v4.2-rwd1-NEW | niklasm222 | 2025-04-30T18:11:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:10:09Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
btswiki-com-paro-aarti-viral/btswiki.com.7.2.video.link.btswiki.com.paro.aarti.viral.video | btswiki-com-paro-aarti-viral | 2025-04-30T17:48:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T17:47:55Z |
<a href="https://sdu.sk/9Ip"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
Who Is Shah Gangu chettri? Gangu chettri is a name that's been making rounds on social media and search engines, especially after a certain "viral video" started trending. But before jumping to conclusions, it's essential to separate facts from fiction.
|
HassaanSeeker/llama-3.2-1b-guanco-finetuned-qlora-layerskip | HassaanSeeker | 2025-04-30T17:26:57Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:46:24Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DataScienceWFSR/distilbert-food-hazard-rw | DataScienceWFSR | 2025-04-30T17:15:54Z | 2 | 0 | null | [
"safetensors",
"distilbert",
"text-classification",
"en",
"arxiv:2504.20703",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"region:us"
] | text-classification | 2025-04-30T10:22:53Z | ---
language:
- en
metrics:
- f1
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
---
# DistilBert Food Hazard Classification Model - Random Word Swapping Augmentation
## Model Details
### Model Description
This model is fine-tuned for multi-class food hazard text classification, using random word swapping augmentation on top of distilbert-base-uncased.
- **Developed by:** [DataScienceWFSR](https://huggingface.co/DataScienceWFSR)
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
### Model Sources
- **Repository:** [https://github.com/WFSRDataScience/SemEval2025Task9](https://github.com/WFSRDataScience/SemEval2025Task9)
- **Paper :** [https://arxiv.org/abs/2504.20703](https://arxiv.org/abs/2504.20703)
## How to Get Started With the Model
Use the code below to get started with the model in PyTorch.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import pandas as pd
model, category, augmentation = 'distilbert', 'hazard', 'rw'
repo_id = f"DataScienceWFSR/{model}-food-{category}-{augmentation}"
lb_path = hf_hub_download(repo_id=repo_id, filename=f"labelencoder_{category}.pkl")
lb = pd.read_pickle(lb_path)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()
sample = ('Case Number: 039-94 Date Opened: 10/20/1994 Date Closed: 03/06/1995 Recall Class: 1'
' Press Release (Y/N): N Domestic Est. Number: 07188 M Name: PREPARED FOODS Imported '
'Product (Y/N): N Foreign Estab. Number: N/A City: SANTA TERESA State: NM Country: USA'
' Product: HAM, SLICED Problem: BACTERIA Description: LISTERIA '
'Total Pounds Recalled: 3,920 Pounds Recovered: 3,920')
inputs = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
predicted_label = lb.inverse_transform(predictions.numpy())[0]
print(f"The predicted label is: {predicted_label}")
```
## Training Details
### Training Data
Training and validation data provided by the SemEval-2025 Task 9 organizers: the `Food Recall Incidents` dataset (English only) [link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/tree/main/data)
### Training Procedure
#### Training Hyperparameters
- batch_size: `32`
- epochs: `10`
- lr_scheduler: `cosine with Restarts`
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
Test data: 997 samples ([link](https://github.com/food-hazard-detection-semeval-2025/food-hazard-detection-semeval-2025.github.io/blob/main/data/incidents_test.csv))
#### Metrics
F<sub>1</sub>-macro
### Results
F<sub>1</sub>-macro scores for each model on the official test set using the `text` field, per category, together with the subtask scores (ST1 and ST2), rounded to 3 decimals. The row in bold marks this model's results.
| Model | hazard-category | product-category | hazard | product | ST1 | ST2 |
|----------------------|----------------:|-----------------:|-------:|--------:|------:|------:|
| BERT<sub>base</sub> | 0.747 | 0.757 | 0.581 | 0.170 | 0.753 | 0.382 |
| BERT<sub>CW</sub> | 0.760 | 0.761 | 0.671 | 0.280 | 0.762 | 0.491 |
| BERT<sub>SR</sub> | 0.770 | 0.754 | 0.666 | 0.275 | 0.764 | 0.478 |
| BERT<sub>RW</sub> | 0.752 | 0.757 | 0.651 | 0.275 | 0.756 | 0.467 |
| DistilBERT<sub>base</sub> | 0.761 | 0.757 | 0.593 | 0.154 | 0.760 | 0.378 |
| DistilBERT<sub>CW</sub> | 0.766 | 0.753 | 0.635 | 0.246 | 0.763 | 0.449 |
| DistilBERT<sub>SR</sub> | 0.756 | 0.759 | 0.644 | 0.240 | 0.763 | 0.448 |
| **DistilBERT<sub>RW</sub>** | **0.749** | **0.747** | **0.647** | **0.261** | **0.753** | **0.462** |
| RoBERTa<sub>base</sub> | 0.760 | 0.753 | 0.579 | 0.123 | 0.755 | 0.356 |
| RoBERTa<sub>CW</sub> | 0.773 | 0.739 | 0.630 | 0.000 | 0.760 | 0.315 |
| RoBERTa<sub>SR</sub> | 0.777 | 0.755 | 0.637 | 0.000 | 0.767 | 0.319 |
| RoBERTa<sub>RW</sub> | 0.757 | 0.611 | 0.615 | 0.000 | 0.686 | 0.308 |
| ModernBERT<sub>base</sub> | 0.781 | 0.745 | 0.667 | 0.275 | 0.769 | 0.485 |
| ModernBERT<sub>CW</sub> | 0.761 | 0.712 | 0.609 | 0.252 | 0.741 | 0.441 |
| ModernBERT<sub>SR</sub> | 0.790 | 0.728 | 0.591 | 0.253 | 0.761 | 0.434 |
| ModernBERT<sub>RW</sub> | 0.761 | 0.751 | 0.629 | 0.237 | 0.759 | 0.440 |
## Technical Specifications
### Compute Infrastructure
#### Hardware
NVIDIA A100 80GB and NVIDIA GeForce RTX 3070 Ti
#### Software
| Library | Version | URL |
|-------------------|--------:|---------------------------------------------------------------------|
| Transformers | 4.49.0 | https://huggingface.co/docs/transformers/index |
| PyTorch | 2.6.0 | https://pytorch.org/ |
| SpaCy | 3.8.4 | https://spacy.io/ |
| Scikit-learn | 1.6.0 | https://scikit-learn.org/stable/ |
| Pandas | 2.2.3 | https://pandas.pydata.org/ |
| Optuna | 4.2.1 | https://optuna.org/ |
| NumPy | 2.0.2 | https://numpy.org/ |
| NLP AUG | 1.1.11 | https://nlpaug.readthedocs.io/en/latest/index.html |
| BeautifulSoup4 | 4.12.3 | https://www.crummy.com/software/BeautifulSoup/bs4/doc/# |
## Citation
**BibTeX:**
For the original paper:
```
@inproceedings{brightcookies-semeval2025-task9,
title="BrightCookies at {S}em{E}val-2025 Task 9: Exploring Data Augmentation for Food Hazard Classification},
author="Papadopoulou, Foteini and Mutlu, Osman and รzen, Neris and van der Velden, Bas H. M. and Hendrickx, Iris and Hรผrriyetoฤlu, Ali",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
For the SemEval2025 Task9:
```
@inproceedings{semeval2025-task9,
title = "{S}em{E}val-2025 Task 9: The Food Hazard Detection Challenge",
author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony and Bakagianni, Juli",
booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
}
```
## Model Card Authors and Contact
Authors: Foteini Papadopoulou, Osman Mutlu, Neris Özen,
Bas H.M. van der Velden, Iris Hendrickx, Ali Hürriyetoğlu
Contact: [email protected] |
secmlr/SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning | secmlr | 2025-04-30T16:56:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:agentica-org/DeepCoder-14B-Preview",
"base_model:finetune:agentica-org/DeepCoder-14B-Preview",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T08:45:24Z | ---
library_name: transformers
license: mit
base_model: agentica-org/DeepCoder-14B-Preview
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWE-BENCH-2000-enriched-reasoning-claude-localization_deepcoder_14b_2000_enriched_reasoning
This model is a fine-tuned version of [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) on the SWE-BENCH-2000-enriched-reasoning-claude-localization dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 12
- total_train_batch_size: 48
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
masani/SFT_parity_Qwen2-0.5B_epoch_4_global_step_12 | masani | 2025-04-30T16:30:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T16:28:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
masani/SFT_parity_Qwen2-0.5B_epoch_1_global_step_3 | masani | 2025-04-30T16:25:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T16:24:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/InternVL3-9B-6bit | mlx-community | 2025-04-30T12:09:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"mlx",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"base_model:OpenGVLab/InternVL3-1B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3-1B-Instruct",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-04-30T12:08:15Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-1B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
- mlx
---
# mlx-community/InternVL3-9B-6bit
This model was converted to MLX format from [`models/InternVL3-9B`]() using mlx-vlm version **0.1.25**.
Refer to the [original model card](https://huggingface.co/OpenGVLab/InternVL3-9B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/InternVL3-9B-6bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_3x3_mixed-data-V3 | annasoli | 2025-04-30T11:35:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T11:27:23Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuckyLukke/grpo_turn_level_onesided_2_starter_change-700 | LuckyLukke | 2025-04-30T11:31:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T11:28:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ClaMncDexter/gemma-3-test-float16 | ClaMncDexter | 2025-04-30T11:02:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T10:37:57Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ClaMncDexter
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
maennyn/roberta-amazon-finefood-sentiment6e | maennyn | 2025-04-30T10:46:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T10:45:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrMoriarty/ppo-LunarLander-v2 | PrMoriarty | 2025-04-30T10:16:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-29T17:39:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.81 +/- 17.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
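The block above is only the template placeholder. A minimal sketch of loading and evaluating this checkpoint with `huggingface_sb3` and Stable-Baselines3 might look like the following; the checkpoint filename is an assumption and should be checked against the files in this repository.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the trained policy from the Hub.
# NOTE: the filename is an assumption; verify it in the repository's file list.
checkpoint = load_from_hub(
    repo_id="PrMoriarty/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate on LunarLander-v2 (needs gymnasium[box2d]; newer gymnasium releases
# may expose the environment as "LunarLander-v3" instead).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```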
|
enestaylan/meta-Llama-3.1-8B-Instruct-GRPO-Length-Repetition | enestaylan | 2025-04-30T06:11:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T01:20:41Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** enestaylan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
filipesantoscv11/88d59005-0232-4218-a70f-21a7c1a2bb3b | filipesantoscv11 | 2025-04-30T06:07:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b",
"base_model:adapter:unsloth/llama-3-8b",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T05:44:00Z | ---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 88d59005-0232-4218-a70f-21a7c1a2bb3b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 8b4ad6b862eb03b6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8b4ad6b862eb03b6_train_data.json
type:
field_input: m4a_tags
field_instruction: title
field_output: pseudo_caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/88d59005-0232-4218-a70f-21a7c1a2bb3b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/8b4ad6b862eb03b6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1cf62b57-c1c4-4347-ba84-b24782145bd2
wandb_project: s56-6
wandb_run: your_name
wandb_runid: 1cf62b57-c1c4-4347-ba84-b24782145bd2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 88d59005-0232-4218-a70f-21a7c1a2bb3b
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2866 | 0.0157 | 200 | 1.2748 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
GilatToker/Disease_Deberta | GilatToker | 2025-04-30T05:33:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T05:28:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
polyglots/llama-3-8b-si-SWritting-Style-Classification-Codeswitched-100pct-10010 | polyglots | 2025-04-30T05:12:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T05:12:14Z | ---
base_model: unsloth/llama-3-8b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** polyglots
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OPEA/Falcon3-10B-Base-int4-sym-awq-inc | OPEA | 2025-04-30T03:50:01Z | 0 | 0 | null | [
"safetensors",
"llama",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:tiiuae/Falcon3-10B-Base",
"base_model:quantized:tiiuae/Falcon3-10B-Base",
"4-bit",
"awq",
"region:us"
] | null | 2024-12-13T05:55:48Z | ---
datasets:
- NeelNanda/pile-10k
base_model:
- tiiuae/Falcon3-10B-Base
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [Falcon3-10B-Base](https://huggingface.co/tiiuae/Falcon3-10B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round).
## How To Use
### INT4 Inference(CPU/HPU/CUDA)
```python
from auto_round import AutoRoundConfig ##must import for auto_round format
from transformers import AutoModelForCausalLM, AutoTokenizer
quantized_model_dir = "OPEA/falcon3-10B-int4-sym-inc"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
device_map="auto",
)
text = "How many r in strawberry? The answer is "
inputs = tokenizer(text, return_tensors="pt", return_token_type_ids=False).to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
text = "How many r in strawberry? The answer is"
##INT4:
"""How many r in strawberry? The answer is 2.
### Additional Questions and Answers
#### 11. **How many r in strawberry?**
**Answer:**
The word "strawberry" contains 2 'r's.
####
"""
##BF16:
"""
How many r in strawberry? The answer is 2.
### 10. **How many r in strawberry?**
**Question:** How many times does the letter 'r' appear in the word "strawberry"?
**Answer:** The letter 'r
**Answer:**
The answer to the riddle"""
"""
text = "Which number is larger, 9.8 or 9.11? The answer is"
##INT4
"""Which number is larger, 9.8 or 9.11? The answer is 9.8.
#### 10. **What is the smallest number in the set {1.2, 1.02, 1.22, 1.002}?**
"""
##BF16:
"""Which number is larger, 9.8 or 9.11? The answer is 9.8.
#### Question 2:
**How do you compare the numbers 12.34 and 12.345?**
**Answer:**
To compare 12.34"""
text = "Once upon a time,"
##INT4:
"""Once upon a time, in a small town named Harmonyville, lived two best friends - Mia and Ben. They were both eight years old and loved exploring the world around them. One sunny afternoon, while playing near the park, they found a mysterious box with a note
"""
##BF16:
"""Once upon a time, in a small town named Harmonyville, there lived two best friends - Timmy the Turtle and Sally the Squirrel. They loved exploring their beautiful forest home together, discovering new things every day. One sunny afternoon, they stumbled upon a mysterious cave filled with
"""
text = "There is a girl who likes adventure,"
##INT4:
"""There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a faraway land called "The Land of the Sun." She packed her bag with everything she needed, including her favorite book about the sun.
"""
##BF16:
"""There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a beautiful country called Italy. She wanted to see all the famous landmarks and try the delicious Italian food.
"""
```
### Evaluate the model
`pip3 install lm-eval==0.4.5`
```bash
auto-round --model "OPEA/falcon3-10B-int4-sym-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu
```
| Metric | BF16 | INT4 |
| ------------------------- | ----------------- | ----------------- |
| Avg. (13 tasks) | 0.6151 | 0.6092 |
| Avg. (10 tasks) | 0.64113 | 0.63584 |
| leaderboard_mmlu_pro | 0.4238 | 0.4156 |
| leaderboard_ifeval | (0.4149+0.2939)/2 | (0.4233+0.2828)/2 |
| gsm8k(5shot) strict match | 0.8067 | 0.7923 |
| mmlu | 0.7069 | 0.6930 |
| lambada_openai | 0.6998 | 0.7025 |
| hellaswag | 0.5873 | 0.5832 |
| winogrande | 0.7380 | 0.7293 |
| piqa | 0.7884 | 0.7889 |
| truthfulqa_mc1 | 0.3427 | 0.3452 |
| openbookqa | 0.3400 | 0.3320 |
| boolq | 0.8232 | 0.8116 |
| arc_easy | 0.8312 | 0.8258 |
| arc_challenge | 0.5538 | 0.5469 |
### Generate the model
Here is the sample command to generate the model.
```bash
auto-round \
--model tiiuae/Falcon3-10B-Base \
--device 0 \
--group_size 128 \
--nsamples 512 \
--bits 4 \
--iter 1000 \
--disable_eval \
--model_dtype 'float16' \
--format 'auto_awq,auto_gptq,auto_round' \
--output_dir "./tmp_autoround"
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
    @article{cheng2023optimize,
      title={Optimize weight rounding via signed gradient descent for the quantization of llms},
      author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
      journal={arXiv preprint arXiv:2309.05516},
      year={2023}
    }
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
sarathlella/dotorgpt-adapter | sarathlella | 2025-04-30T03:43:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2025-04-30T03:43:11Z | ---
base_model: microsoft/phi-2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
hxyscott/math-decontamination-4.1-mini-rank64-error_removed-7epoch | hxyscott | 2025-04-29T23:42:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T14:05:36Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ehab07/distilbert-rotten-tomatoes | ehab07 | 2025-04-29T23:31:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T22:19:45Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
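Until the authors add details, here is a minimal inference sketch. It assumes this checkpoint is a standard text-classification head on DistilBERT (as the repo tags and the "rotten-tomatoes" name suggest); the labels and scores returned are whatever the checkpoint's config defines.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="ehab07/distilbert-rotten-tomatoes")

# Score a movie review; the returned label/score are defined by the checkpoint's config
print(classifier("A warm, funny and quietly devastating little film."))
```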
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cpu
- Datasets 3.5.1
- Tokenizers 0.21.1
|
MrRobotoAI/F7 | MrRobotoAI | 2025-04-29T20:33:15Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/B7",
"base_model:merge:MrRobotoAI/B7",
"base_model:MrRobotoAI/B8",
"base_model:merge:MrRobotoAI/B8",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T11:11:04Z | ---
base_model:
- MrRobotoAI/B7
- MrRobotoAI/B8
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/B7](https://huggingface.co/MrRobotoAI/B7) as a base.
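Informally, task arithmetic treats each model's delta from the base as a "task vector" and adds a weighted sum of those deltas back onto the base; roughly:

$$\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i\,(\theta_i - \theta_{\text{base}})$$

where the per-tensor weights $w_i$ here follow the filter-specific schedules given in the configuration below.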
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/B8](https://huggingface.co/MrRobotoAI/B8)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/B7
  parameters:
    weight:
    - filter: v_proj
      value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
    - filter: o_proj
      value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
    - filter: up_proj
      value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
    - filter: gate_proj
      value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
    - filter: down_proj
      value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
    - value: 1
- model: MrRobotoAI/B8
  parameters:
    weight:
    - filter: v_proj
      value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
    - filter: o_proj
      value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
    - filter: up_proj
      value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
    - filter: gate_proj
      value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
    - filter: down_proj
      value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
    - value: 0
base_model: MrRobotoAI/B7
dtype: bfloat16
```
|
mih12345/carlos_30_april | mih12345 | 2025-04-29T20:22:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T20:18:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
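Until the authors fill this in, a minimal sketch assuming a standard Transformers causal LM checkpoint (the repository's tags list `llama` and `text-generation`); the repo id is the only detail taken from this page:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load tokenizer and model directly from this repository
tokenizer = AutoTokenizer.from_pretrained("mih12345/carlos_30_april")
model = AutoModelForCausalLM.from_pretrained("mih12345/carlos_30_april", device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```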
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yashikam19/flan_large_model | yashikam19 | 2025-04-29T20:08:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T18:42:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tashiroksksks/evellyn2v | Tashiroksksks | 2025-04-29T18:18:47Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-29T17:46:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Jonjew/EvanRachelWood | Jonjew | 2025-04-29T17:12:05Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-04-29T17:11:54Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Evan Rachel Wood
output:
url: images/erwood.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Evan Rachel Wood
license: unknown
---
# Evan Rachel Wood by Fluximus_Maximus
<Gallery />
## Model description
FROM https://civitai.com/models/1522246/evan-rachel-wood?modelVersionId=1722297
Please support the creator by donating BUZZ and liking the model at the page above.
Trigger word: `Evan Rachel Wood`
## Trigger words
You should use `Evan Rachel Wood` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/EvanRachelWood/tree/main) them in the Files & versions tab.
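A minimal diffusers sketch, assuming the repository contains a single FLUX LoRA in Safetensors format (pass `weight_name=...` explicitly if the file is not picked up automatically):
```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline and attach this LoRA
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Jonjew/EvanRachelWood")

# Use the trigger word from the section above in the prompt
image = pipeline("Evan Rachel Wood, studio portrait, soft light").images[0]
image.save("evan_rachel_wood.png")
```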
|
10-Paro-Aarti-Viral-Video-Original-Shoot/Original.Clip.Paro.Aarti.Viral.Video.Leaks.official | 10-Paro-Aarti-Viral-Video-Original-Shoot | 2025-04-29T16:04:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T16:04:19Z |
<a href="https://sdu.sk/9Ip"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ►► Click Here to Watch (Open up and watch Full video HD)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ►► Click Here to Watch (Full Video Link)</a>
|
AlanLanSS/mnem_qwen | AlanLanSS | 2025-04-29T04:15:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T23:20:10Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlanLanSS
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
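A minimal loading sketch, assuming the uploaded weights can be loaded directly with Unsloth (adjust `load_in_4bit` to your hardware):
```python
from unsloth import FastLanguageModel

# Load the uploaded checkpoint for inference
model, tokenizer = FastLanguageModel.from_pretrained(
    "AlanLanSS/mnem_qwen",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables faster generation

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```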
|
mlx-community/Qwen3-32B-4bit | mlx-community | 2025-04-29T02:52:43Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2025-04-28T22:15:55Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-32B
---
# mlx-community/Qwen3-32B-4bit
This model [mlx-community/Qwen3-32B-4bit](https://huggingface.co/mlx-community/Qwen3-32B-4bit) was
converted to MLX format from [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-32B-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
vertings6/b0a0000b-ca05-48e6-9378-49252628f65a | vertings6 | 2025-04-29T01:17:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T00:39:48Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b0a0000b-ca05-48e6-9378-49252628f65a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 09fd8de16e0ef037_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/09fd8de16e0ef037_train_data.json
  type:
    field_input: Patient
    field_instruction: Description
    field_output: Doctor
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/b0a0000b-ca05-48e6-9378-49252628f65a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/09fd8de16e0ef037_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e9a3f091-ac21-4461-8f15-2557f19c34f8
wandb_project: s56-32
wandb_run: your_name
wandb_runid: e9a3f091-ac21-4461-8f15-2557f19c34f8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b0a0000b-ca05-48e6-9378-49252628f65a
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6998
## Model description
More information needed
## Intended uses & limitations
More information needed
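Until more information is added, a minimal inference sketch, assuming the published artifact is the LoRA adapter from the run above applied to the base model listed in the config:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "defog/sqlcoder-7b-2", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "vertings6/b0a0000b-ca05-48e6-9378-49252628f65a")
tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")

prompt = "Patient asks about persistent headaches. What should the doctor advise?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=100)[0], skip_special_tokens=True))
```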
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1528 | 0.0066 | 200 | 2.6998 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/amoral-cogito-Zara-14B-i1-GGUF | mradermacher | 2025-04-28T23:24:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Disya/amoral-cogito-Zara-14B",
"base_model:quantized:Disya/amoral-cogito-Zara-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T17:12:50Z | ---
base_model: Disya/amoral-cogito-Zara-14B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Disya/amoral-cogito-Zara-14B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
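As one common local option (not specific to this repo), a single-file quant from the table below can be run with llama-cpp-python; the file name is just an example taken from the table:
```python
from llama_cpp import Llama

# Load a GGUF quant downloaded from this repository (path/filename are examples)
llm = Llama(model_path="amoral-cogito-Zara-14B.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain what an imatrix quant is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```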
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
BootesVoid/cm9x549d901fsvc0915q4il31_cma1b2i0o00bl12tv9kj8g3gg | BootesVoid | 2025-04-28T17:27:35Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T17:27:33Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SE24682OFAIMD
---
# Cm9X549D901Fsvc0915Q4Il31_Cma1B2I0O00Bl12Tv9Kj8G3Gg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SE24682OFAIMD` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "SE24682OFAIMD",
    "lora_weights": "https://huggingface.co/BootesVoid/cm9x549d901fsvc0915q4il31_cma1b2i0o00bl12tv9kj8g3gg/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9x549d901fsvc0915q4il31_cma1b2i0o00bl12tv9kj8g3gg', weight_name='lora.safetensors')
image = pipeline('SE24682OFAIMD').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9x549d901fsvc0915q4il31_cma1b2i0o00bl12tv9kj8g3gg/discussions) to add images that show off what you've made with this LoRA.
|
gxhf/vbnm | gxhf | 2025-04-28T16:14:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T16:14:55Z | ---
license: apache-2.0
---
|
Sameer2407/PriceLLaMAA-2025-04-28_07.20.50 | Sameer2407 | 2025-04-28T10:11:21Z | 0 | 1 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:ed-donner/pricer-data",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-04-28T07:25:46Z | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: PriceLLaMAA-2025-04-28_07.20.50
results: []
datasets:
- ed-donner/pricer-data
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sameer2001-poornima-university/PriceLLaMAA/runs/yswmgs4h)
# PriceLLaMAA-2025-04-28_07.20.50
This repository contains a fine-tuned LLaMA model for predicting product prices from their descriptions. It is trained on the ed-donner/pricer-data dataset with the trl library, using Supervised Fine-Tuning (SFT) and LoRA (Low-Rank Adaptation).
## Model description
The base model used is meta-llama/Meta-Llama-3.1-8B. It's quantized to 4 bits using bitsandbytes for memory efficiency. The model is fine-tuned using LoRA, targeting specific layers (q_proj, v_proj, k_proj, o_proj) for efficient adaptation.
## Intended Uses
- **Price Prediction:**
The model is designed to predict or estimate the price of a product based on its textual description.
- **E-commerce Applications:**
Can be used by online sellers, marketplaces, or catalog management systems to suggest initial pricing based on product descriptions.
- **Data Augmentation:**
Helpful for generating synthetic price labels for datasets during training of other machine learning models.
- **Market Research:**
Can assist analysts in comparing how similar product descriptions could correlate with price estimates.
---
## Limitations
- **Domain-Specific:**
The model is trained primarily on e-commerce-style product descriptions. It may not perform well outside typical retail scenarios (e.g., luxury items, collectibles, services).
- **No Real-Time Market Awareness:**
The model does not have access to real-time pricing, supply-demand factors, or current market trends.
- **Approximate Predictions:**
Outputs are estimates based on learned patterns in the training data and are not guaranteed to be accurate for production financial decisions.
- **Bias from Training Data:**
If the training dataset contains biases (e.g., certain product categories being overpriced/underpriced), the model may inherit those biases.
- **Language and Format Sensitivity:**
Descriptions that are extremely short, poorly written, or in languages/formats very different from the training data may yield poor predictions.
---
## Training Details
- *Dataset:* ed-donner/pricer-data
- *Base Model:* meta-llama/Meta-Llama-3.1-8B
- *Quantization:* 4-bit NF4
- *Fine-tuning Method:* LoRA with SFT
- *Library:* trl
- *Hyperparameters:* See the training script in the repository for detailed hyperparameter values.
## Training Procedure
The model was fine-tuned using **Supervised Fine-Tuning (SFT)** combined with **LoRA** for parameter-efficient adaptation. The base model `meta-llama/Meta-Llama-3.1-8B` was loaded in 4-bit precision to optimize memory usage.
The training steps were:
1. **Model Preparation:**
- Loaded the base model (`Meta-Llama-3.1-8B`) in 4-bit NF4 quantization using `bitsandbytes`.
- Applied a LoRA configuration targeting the following modules:
- `q_proj`
- `k_proj`
- `v_proj`
- `o_proj`
2. **Dataset:**
- Used the `ed-donner/pricer-data` dataset, which consists of product descriptions and corresponding prices.
3. **Training Setup:**
- Fine-tuned using the `trl` library's SFTTrainer.
- Optimizer: `PagedAdamW` with betas=(0.9, 0.999) and epsilon=1e-08.
- Learning rate scheduler: Cosine decay schedule with 3% warmup ratio.
- Random seed: 42 for reproducibility.
4. **Hyperparameters:**
- Learning Rate: 1e-4
- Training Batch Size: 2
- Evaluation Batch Size: 1
- Number of Epochs: 1
5. **Monitoring:**
- Tracked training loss and evaluation metrics using Weights & Biases (wandb).
6. **Saving:**
- Only the LoRA adapters were saved, keeping the base model frozen to ensure lightweight deployment.
The entire training was optimized for fast prototyping and low GPU memory usage without sacrificing too much performance.
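As a sketch of steps 1-2 above (standard transformers/peft usage; the LoRA rank and alpha shown are illustrative, since the card does not state them):
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapters on the attention projections listed in step 1
lora_config = LoraConfig(
    r=16,                # illustrative; not stated in the card
    lora_alpha=32,       # illustrative; not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```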
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
## Demo Usage
You can use the model for inference like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load the base model (Meta-Llama-3.1-8B)
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
# Load the fine-tuned model with PEFT
model_name = "Sameer2407/PriceLLaMAA-2025-04-28_07.20.50" # Replace with your model path
model = PeftModel.from_pretrained(base_model, model_name)
# Load the tokenizer of the base model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
# Define a product description
product_description = "A sleek, modern stainless steel electric kettle with 1.5-liter capacity and auto shut-off feature."
# Prepare input
inputs = tokenizer(f"Predict the price: {product_description}", return_tensors="pt").to(model.device)
# Generate output
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)
# Decode and print the predicted price
predicted_price = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(predicted_price)
```
|
yashikam19/fine-tuned-flan | yashikam19 | 2025-04-27T10:22:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T10:22:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |